Test Report: KVM_Linux 17824

Commit: e73fe628963980756e0b55e8e214a727ecfdefcc:2023-12-18:32333

Failed tests: 2 of 328

Order  Failed test                              Duration (s)
227    TestMultiNode/serial/RestartKeepsNodes   117.17
228    TestMultiNode/serial/DeleteNode          3.26
TestMultiNode/serial/RestartKeepsNodes (117.17s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-107476
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-107476
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-107476: (27.862009596s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-107476 --wait=true -v=8 --alsologtostderr
E1218 11:52:48.660709  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 11:52:56.316382  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 11:53:24.001005  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 11:53:59.418584  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
multinode_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-107476 --wait=true -v=8 --alsologtostderr: exit status 90 (1m26.576526513s)

-- stdout --
	* [multinode-107476] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17824-683489/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17824-683489/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node multinode-107476 in cluster multinode-107476
	* Restarting existing kvm2 VM for "multinode-107476" ...
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Starting worker node multinode-107476-m02 in cluster multinode-107476
	* Restarting existing kvm2 VM for "multinode-107476-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.124
	
	

-- /stdout --
** stderr ** 
	I1218 11:52:43.588877  706399 out.go:296] Setting OutFile to fd 1 ...
	I1218 11:52:43.589039  706399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:52:43.589053  706399 out.go:309] Setting ErrFile to fd 2...
	I1218 11:52:43.589061  706399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:52:43.589245  706399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
	I1218 11:52:43.589801  706399 out.go:303] Setting JSON to false
	I1218 11:52:43.590759  706399 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12910,"bootTime":1702887454,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1218 11:52:43.590822  706399 start.go:138] virtualization: kvm guest
	I1218 11:52:43.593457  706399 out.go:177] * [multinode-107476] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1218 11:52:43.595324  706399 notify.go:220] Checking for updates...
	I1218 11:52:43.595332  706399 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 11:52:43.597000  706399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 11:52:43.598742  706399 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:52:43.600311  706399 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17824-683489/.minikube
	I1218 11:52:43.601844  706399 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1218 11:52:43.603279  706399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 11:52:43.605238  706399 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:52:43.605343  706399 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 11:52:43.605808  706399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:52:43.605854  706399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:52:43.620145  706399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34787
	I1218 11:52:43.620579  706399 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:52:43.621112  706399 main.go:141] libmachine: Using API Version  1
	I1218 11:52:43.621138  706399 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:52:43.621497  706399 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:52:43.621692  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:52:43.657009  706399 out.go:177] * Using the kvm2 driver based on existing profile
	I1218 11:52:43.658657  706399 start.go:298] selected driver: kvm2
	I1218 11:52:43.658673  706399 start.go:902] validating driver "kvm2" against &{Name:multinode-107476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-107476 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.238 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.39 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inacce
l:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:52:43.658875  706399 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 11:52:43.659246  706399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:52:43.659332  706399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17824-683489/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1218 11:52:43.674156  706399 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1218 11:52:43.674836  706399 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 11:52:43.674935  706399 cni.go:84] Creating CNI manager for ""
	I1218 11:52:43.674959  706399 cni.go:136] 3 nodes found, recommending kindnet
	I1218 11:52:43.674972  706399 start_flags.go:323] config:
	{Name:multinode-107476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-107476 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.238 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.39 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false ist
io-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:52:43.675263  706399 iso.go:125] acquiring lock: {Name:mk77379b84c746649cc72ce2f2c3817c5150de49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:52:43.677310  706399 out.go:177] * Starting control plane node multinode-107476 in cluster multinode-107476
	I1218 11:52:43.678882  706399 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 11:52:43.678926  706399 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1218 11:52:43.678945  706399 cache.go:56] Caching tarball of preloaded images
	I1218 11:52:43.679040  706399 preload.go:174] Found /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 11:52:43.679053  706399 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 11:52:43.679182  706399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/config.json ...
	I1218 11:52:43.679387  706399 start.go:365] acquiring machines lock for multinode-107476: {Name:mkb0cc9fb73bf09f8db2889f035117cd52674d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 11:52:43.679439  706399 start.go:369] acquired machines lock for "multinode-107476" in 30.186µs
	I1218 11:52:43.679462  706399 start.go:96] Skipping create...Using existing machine configuration
	I1218 11:52:43.679473  706399 fix.go:54] fixHost starting: 
	I1218 11:52:43.679818  706399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:52:43.679872  706399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:52:43.693824  706399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35321
	I1218 11:52:43.694215  706399 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:52:43.694677  706399 main.go:141] libmachine: Using API Version  1
	I1218 11:52:43.694699  706399 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:52:43.695098  706399 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:52:43.695284  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:52:43.695482  706399 main.go:141] libmachine: (multinode-107476) Calling .GetState
	I1218 11:52:43.697182  706399 fix.go:102] recreateIfNeeded on multinode-107476: state=Stopped err=<nil>
	I1218 11:52:43.697205  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	W1218 11:52:43.697378  706399 fix.go:128] unexpected machine state, will restart: <nil>
	I1218 11:52:43.699486  706399 out.go:177] * Restarting existing kvm2 VM for "multinode-107476" ...
	I1218 11:52:43.701188  706399 main.go:141] libmachine: (multinode-107476) Calling .Start
	I1218 11:52:43.701381  706399 main.go:141] libmachine: (multinode-107476) Ensuring networks are active...
	I1218 11:52:43.702137  706399 main.go:141] libmachine: (multinode-107476) Ensuring network default is active
	I1218 11:52:43.702575  706399 main.go:141] libmachine: (multinode-107476) Ensuring network mk-multinode-107476 is active
	I1218 11:52:43.702882  706399 main.go:141] libmachine: (multinode-107476) Getting domain xml...
	I1218 11:52:43.703479  706399 main.go:141] libmachine: (multinode-107476) Creating domain...
	I1218 11:52:44.937955  706399 main.go:141] libmachine: (multinode-107476) Waiting to get IP...
	I1218 11:52:44.939039  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:44.939474  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:44.939585  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:44.939441  706428 retry.go:31] will retry after 295.497233ms: waiting for machine to come up
	I1218 11:52:45.237103  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:45.237598  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:45.237650  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:45.237528  706428 retry.go:31] will retry after 241.852686ms: waiting for machine to come up
	I1218 11:52:45.481091  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:45.481474  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:45.481504  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:45.481425  706428 retry.go:31] will retry after 405.008398ms: waiting for machine to come up
	I1218 11:52:45.887993  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:45.888530  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:45.888561  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:45.888436  706428 retry.go:31] will retry after 596.878679ms: waiting for machine to come up
	I1218 11:52:46.487207  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:46.487686  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:46.487723  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:46.487646  706428 retry.go:31] will retry after 479.661609ms: waiting for machine to come up
	I1218 11:52:46.969331  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:46.969779  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:46.969813  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:46.969718  706428 retry.go:31] will retry after 695.785621ms: waiting for machine to come up
	I1218 11:52:47.666484  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:47.666895  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:47.666928  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:47.666826  706428 retry.go:31] will retry after 798.848059ms: waiting for machine to come up
	I1218 11:52:48.466719  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:48.467146  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:48.467178  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:48.467086  706428 retry.go:31] will retry after 1.485767878s: waiting for machine to come up
	I1218 11:52:49.954305  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:49.954699  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:49.954749  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:49.954654  706428 retry.go:31] will retry after 1.819619299s: waiting for machine to come up
	I1218 11:52:51.776607  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:51.776992  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:51.777016  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:51.776952  706428 retry.go:31] will retry after 2.317000445s: waiting for machine to come up
	I1218 11:52:54.096025  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:54.096436  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:54.096462  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:54.096372  706428 retry.go:31] will retry after 2.107748825s: waiting for machine to come up
	I1218 11:52:56.206568  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:56.206940  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:56.206971  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:56.206886  706428 retry.go:31] will retry after 2.701224561s: waiting for machine to come up
	I1218 11:52:58.909780  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:58.910163  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:58.910194  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:58.910118  706428 retry.go:31] will retry after 4.332174915s: waiting for machine to come up
	I1218 11:53:03.247678  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.248150  706399 main.go:141] libmachine: (multinode-107476) Found IP for machine: 192.168.39.124
	I1218 11:53:03.248181  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has current primary IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.248192  706399 main.go:141] libmachine: (multinode-107476) Reserving static IP address...
	I1218 11:53:03.248681  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "multinode-107476", mac: "52:54:00:4e:59:cb", ip: "192.168.39.124"} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.248710  706399 main.go:141] libmachine: (multinode-107476) DBG | skip adding static IP to network mk-multinode-107476 - found existing host DHCP lease matching {name: "multinode-107476", mac: "52:54:00:4e:59:cb", ip: "192.168.39.124"}
	I1218 11:53:03.248725  706399 main.go:141] libmachine: (multinode-107476) Reserved static IP address: 192.168.39.124
	I1218 11:53:03.248735  706399 main.go:141] libmachine: (multinode-107476) DBG | Getting to WaitForSSH function...
	I1218 11:53:03.248752  706399 main.go:141] libmachine: (multinode-107476) Waiting for SSH to be available...
	I1218 11:53:03.250850  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.251272  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.251305  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.251380  706399 main.go:141] libmachine: (multinode-107476) DBG | Using SSH client type: external
	I1218 11:53:03.251431  706399 main.go:141] libmachine: (multinode-107476) DBG | Using SSH private key: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa (-rw-------)
	I1218 11:53:03.251495  706399 main.go:141] libmachine: (multinode-107476) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.124 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1218 11:53:03.251518  706399 main.go:141] libmachine: (multinode-107476) DBG | About to run SSH command:
	I1218 11:53:03.251537  706399 main.go:141] libmachine: (multinode-107476) DBG | exit 0
	I1218 11:53:03.347693  706399 main.go:141] libmachine: (multinode-107476) DBG | SSH cmd err, output: <nil>: 
	I1218 11:53:03.348069  706399 main.go:141] libmachine: (multinode-107476) Calling .GetConfigRaw
	I1218 11:53:03.348923  706399 main.go:141] libmachine: (multinode-107476) Calling .GetIP
	I1218 11:53:03.351464  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.351874  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.351906  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.352189  706399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/config.json ...
	I1218 11:53:03.352408  706399 machine.go:88] provisioning docker machine ...
	I1218 11:53:03.352426  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:03.352628  706399 main.go:141] libmachine: (multinode-107476) Calling .GetMachineName
	I1218 11:53:03.352841  706399 buildroot.go:166] provisioning hostname "multinode-107476"
	I1218 11:53:03.352861  706399 main.go:141] libmachine: (multinode-107476) Calling .GetMachineName
	I1218 11:53:03.353044  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:03.355260  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.355633  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.355665  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.355775  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:03.355965  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.356114  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.356209  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:03.356327  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:03.356684  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:03.356702  706399 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-107476 && echo "multinode-107476" | sudo tee /etc/hostname
	I1218 11:53:03.495478  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-107476
	
	I1218 11:53:03.495519  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:03.498288  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.498747  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.498802  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.499026  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:03.499258  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.499423  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.499560  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:03.499796  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:03.500102  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:03.500118  706399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-107476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-107476/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-107476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 11:53:03.636275  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 11:53:03.636312  706399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17824-683489/.minikube CaCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17824-683489/.minikube}
	I1218 11:53:03.636332  706399 buildroot.go:174] setting up certificates
	I1218 11:53:03.636351  706399 provision.go:83] configureAuth start
	I1218 11:53:03.636370  706399 main.go:141] libmachine: (multinode-107476) Calling .GetMachineName
	I1218 11:53:03.636693  706399 main.go:141] libmachine: (multinode-107476) Calling .GetIP
	I1218 11:53:03.639303  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.639759  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.639801  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.639935  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:03.641968  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.642455  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.642483  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.642629  706399 provision.go:138] copyHostCerts
	I1218 11:53:03.642664  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem
	I1218 11:53:03.642722  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem, removing ...
	I1218 11:53:03.642737  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem
	I1218 11:53:03.642819  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem (1082 bytes)
	I1218 11:53:03.642933  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem
	I1218 11:53:03.642958  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem, removing ...
	I1218 11:53:03.642970  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem
	I1218 11:53:03.643012  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem (1123 bytes)
	I1218 11:53:03.643087  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem
	I1218 11:53:03.643118  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem, removing ...
	I1218 11:53:03.643123  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem
	I1218 11:53:03.643155  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem (1679 bytes)
	I1218 11:53:03.643235  706399 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca-key.pem org=jenkins.multinode-107476 san=[192.168.39.124 192.168.39.124 localhost 127.0.0.1 minikube multinode-107476]
	I1218 11:53:03.728895  706399 provision.go:172] copyRemoteCerts
	I1218 11:53:03.728965  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 11:53:03.728993  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:03.732532  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.733011  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.733057  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.733166  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:03.733459  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.733658  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:03.733825  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa Username:docker}
	I1218 11:53:03.829438  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 11:53:03.829540  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 11:53:03.851440  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 11:53:03.851526  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1218 11:53:03.872997  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 11:53:03.873064  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 11:53:03.894126  706399 provision.go:86] duration metric: configureAuth took 257.762653ms
	I1218 11:53:03.894171  706399 buildroot.go:189] setting minikube options for container-runtime
	I1218 11:53:03.894430  706399 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:53:03.894459  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:03.894777  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:03.897379  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.897774  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.897800  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.897918  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:03.898164  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.898354  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.898519  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:03.898720  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:03.899054  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:03.899067  706399 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 11:53:04.029431  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 11:53:04.029454  706399 buildroot.go:70] root file system type: tmpfs
	I1218 11:53:04.029610  706399 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 11:53:04.029643  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:04.032284  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:04.032632  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:04.032657  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:04.032884  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:04.033092  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:04.033244  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:04.033356  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:04.033497  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:04.033807  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:04.033872  706399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 11:53:04.172200  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 11:53:04.172259  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:04.175231  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:04.175567  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:04.175603  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:04.175767  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:04.175973  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:04.176163  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:04.176296  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:04.176471  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:04.176900  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:04.176921  706399 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 11:53:05.124159  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1218 11:53:05.124189  706399 machine.go:91] provisioned docker machine in 1.771768968s
	I1218 11:53:05.124202  706399 start.go:300] post-start starting for "multinode-107476" (driver="kvm2")
	I1218 11:53:05.124213  706399 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 11:53:05.124248  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:05.124618  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 11:53:05.124659  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:05.127177  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.127511  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:05.127543  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.127822  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:05.128019  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:05.128232  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:05.128365  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa Username:docker}
	I1218 11:53:05.221325  706399 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 11:53:05.225431  706399 command_runner.go:130] > NAME=Buildroot
	I1218 11:53:05.225452  706399 command_runner.go:130] > VERSION=2021.02.12-1-g0492d51-dirty
	I1218 11:53:05.225458  706399 command_runner.go:130] > ID=buildroot
	I1218 11:53:05.225465  706399 command_runner.go:130] > VERSION_ID=2021.02.12
	I1218 11:53:05.225470  706399 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1218 11:53:05.225498  706399 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 11:53:05.225513  706399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17824-683489/.minikube/addons for local assets ...
	I1218 11:53:05.225581  706399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17824-683489/.minikube/files for local assets ...
	I1218 11:53:05.225689  706399 filesync.go:149] local asset: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem -> 6907392.pem in /etc/ssl/certs
	I1218 11:53:05.225707  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem -> /etc/ssl/certs/6907392.pem
	I1218 11:53:05.225825  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 11:53:05.234060  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem --> /etc/ssl/certs/6907392.pem (1708 bytes)
	I1218 11:53:05.256308  706399 start.go:303] post-start completed in 132.091269ms
	I1218 11:53:05.256346  706399 fix.go:56] fixHost completed within 21.576872921s
	I1218 11:53:05.256378  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:05.259066  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.259438  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:05.259467  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.259594  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:05.259822  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:05.260000  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:05.260132  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:05.260300  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:05.260663  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:05.260677  706399 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1218 11:53:05.388710  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702900385.336515708
	
	I1218 11:53:05.388739  706399 fix.go:206] guest clock: 1702900385.336515708
	I1218 11:53:05.388748  706399 fix.go:219] Guest: 2023-12-18 11:53:05.336515708 +0000 UTC Remote: 2023-12-18 11:53:05.256351307 +0000 UTC m=+21.719709962 (delta=80.164401ms)
	I1218 11:53:05.388776  706399 fix.go:190] guest clock delta is within tolerance: 80.164401ms
	I1218 11:53:05.388781  706399 start.go:83] releasing machines lock for "multinode-107476", held for 21.709329749s
	I1218 11:53:05.388800  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:05.389070  706399 main.go:141] libmachine: (multinode-107476) Calling .GetIP
	I1218 11:53:05.391842  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.392255  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:05.392297  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.392448  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:05.392945  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:05.393126  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:05.393230  706399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 11:53:05.393297  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:05.393344  706399 ssh_runner.go:195] Run: cat /version.json
	I1218 11:53:05.393374  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:05.396053  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.396366  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:05.396390  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.396415  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.396575  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:05.396796  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:05.396908  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:05.396935  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.396951  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:05.397108  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:05.397138  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa Username:docker}
	I1218 11:53:05.397245  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:05.397399  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:05.397526  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa Username:docker}
	I1218 11:53:05.484417  706399 command_runner.go:130] > {"iso_version": "v1.32.1-1702490427-17765", "kicbase_version": "v0.0.42-1702394725-17761", "minikube_version": "v1.32.0", "commit": "2780c4af854905e5cd4b94dc93de1f9d00b9040d"}
	I1218 11:53:05.484584  706399 ssh_runner.go:195] Run: systemctl --version
	I1218 11:53:05.515488  706399 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 11:53:05.515582  706399 command_runner.go:130] > systemd 247 (247)
	I1218 11:53:05.515612  706399 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1218 11:53:05.515721  706399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 11:53:05.522226  706399 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1218 11:53:05.522290  706399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 11:53:05.522345  706399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 11:53:05.538265  706399 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1218 11:53:05.538337  706399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1218 11:53:05.538357  706399 start.go:475] detecting cgroup driver to use...
	I1218 11:53:05.538518  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 11:53:05.556555  706399 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1218 11:53:05.556669  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 11:53:05.566263  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 11:53:05.575359  706399 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 11:53:05.575428  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 11:53:05.584526  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 11:53:05.593691  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 11:53:05.602941  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 11:53:05.612320  706399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 11:53:05.621674  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 11:53:05.630899  706399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 11:53:05.639775  706399 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 11:53:05.640003  706399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 11:53:05.648244  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:53:05.747265  706399 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 11:53:05.764104  706399 start.go:475] detecting cgroup driver to use...
	I1218 11:53:05.764197  706399 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 11:53:05.781204  706399 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1218 11:53:05.781232  706399 command_runner.go:130] > [Unit]
	I1218 11:53:05.781238  706399 command_runner.go:130] > Description=Docker Application Container Engine
	I1218 11:53:05.781249  706399 command_runner.go:130] > Documentation=https://docs.docker.com
	I1218 11:53:05.781255  706399 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1218 11:53:05.781260  706399 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1218 11:53:05.781269  706399 command_runner.go:130] > StartLimitBurst=3
	I1218 11:53:05.781273  706399 command_runner.go:130] > StartLimitIntervalSec=60
	I1218 11:53:05.781277  706399 command_runner.go:130] > [Service]
	I1218 11:53:05.781283  706399 command_runner.go:130] > Type=notify
	I1218 11:53:05.781287  706399 command_runner.go:130] > Restart=on-failure
	I1218 11:53:05.781294  706399 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1218 11:53:05.781305  706399 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1218 11:53:05.781312  706399 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1218 11:53:05.781321  706399 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1218 11:53:05.781332  706399 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1218 11:53:05.781338  706399 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1218 11:53:05.781348  706399 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1218 11:53:05.781360  706399 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1218 11:53:05.781374  706399 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1218 11:53:05.781380  706399 command_runner.go:130] > ExecStart=
	I1218 11:53:05.781395  706399 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1218 11:53:05.781406  706399 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1218 11:53:05.781420  706399 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1218 11:53:05.781437  706399 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1218 11:53:05.781448  706399 command_runner.go:130] > LimitNOFILE=infinity
	I1218 11:53:05.781457  706399 command_runner.go:130] > LimitNPROC=infinity
	I1218 11:53:05.781466  706399 command_runner.go:130] > LimitCORE=infinity
	I1218 11:53:05.781478  706399 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1218 11:53:05.781489  706399 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1218 11:53:05.781503  706399 command_runner.go:130] > TasksMax=infinity
	I1218 11:53:05.781510  706399 command_runner.go:130] > TimeoutStartSec=0
	I1218 11:53:05.781518  706399 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1218 11:53:05.781524  706399 command_runner.go:130] > Delegate=yes
	I1218 11:53:05.781533  706399 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1218 11:53:05.781540  706399 command_runner.go:130] > KillMode=process
	I1218 11:53:05.781546  706399 command_runner.go:130] > [Install]
	I1218 11:53:05.781565  706399 command_runner.go:130] > WantedBy=multi-user.target
	I1218 11:53:05.781637  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 11:53:05.804433  706399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 11:53:05.824109  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 11:53:05.835893  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 11:53:05.847147  706399 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 11:53:05.877224  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 11:53:05.889672  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 11:53:05.907426  706399 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1218 11:53:05.907507  706399 ssh_runner.go:195] Run: which cri-dockerd
	I1218 11:53:05.910712  706399 command_runner.go:130] > /usr/bin/cri-dockerd
	I1218 11:53:05.911118  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 11:53:05.919164  706399 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 11:53:05.935395  706399 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 11:53:06.037158  706399 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 11:53:06.143405  706399 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 11:53:06.143544  706399 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1218 11:53:06.160341  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:53:06.269342  706399 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 11:53:07.733823  706399 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.464413724s)
	I1218 11:53:07.733899  706399 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 11:53:07.833594  706399 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1218 11:53:07.945199  706399 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 11:53:08.049248  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:53:08.158198  706399 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1218 11:53:08.174701  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:53:08.276820  706399 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1218 11:53:08.358434  706399 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1218 11:53:08.358505  706399 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1218 11:53:08.364441  706399 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1218 11:53:08.364463  706399 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1218 11:53:08.364470  706399 command_runner.go:130] > Device: 16h/22d	Inode: 833         Links: 1
	I1218 11:53:08.364476  706399 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1218 11:53:08.364488  706399 command_runner.go:130] > Access: 2023-12-18 11:53:08.237952217 +0000
	I1218 11:53:08.364496  706399 command_runner.go:130] > Modify: 2023-12-18 11:53:08.237952217 +0000
	I1218 11:53:08.364506  706399 command_runner.go:130] > Change: 2023-12-18 11:53:08.240952217 +0000
	I1218 11:53:08.364516  706399 command_runner.go:130] >  Birth: -
	I1218 11:53:08.364858  706399 start.go:543] Will wait 60s for crictl version
	I1218 11:53:08.364931  706399 ssh_runner.go:195] Run: which crictl
	I1218 11:53:08.368876  706399 command_runner.go:130] > /usr/bin/crictl
	I1218 11:53:08.369038  706399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1218 11:53:08.420803  706399 command_runner.go:130] > Version:  0.1.0
	I1218 11:53:08.420827  706399 command_runner.go:130] > RuntimeName:  docker
	I1218 11:53:08.420831  706399 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1218 11:53:08.420836  706399 command_runner.go:130] > RuntimeApiVersion:  v1
	I1218 11:53:08.420859  706399 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1218 11:53:08.420916  706399 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 11:53:08.449342  706399 command_runner.go:130] > 24.0.7
	I1218 11:53:08.450610  706399 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 11:53:08.475832  706399 command_runner.go:130] > 24.0.7
	I1218 11:53:08.478214  706399 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1218 11:53:08.478259  706399 main.go:141] libmachine: (multinode-107476) Calling .GetIP
	I1218 11:53:08.481071  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:08.481405  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:08.481434  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:08.481669  706399 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1218 11:53:08.485727  706399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 11:53:08.498500  706399 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 11:53:08.498560  706399 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 11:53:08.517432  706399 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1218 11:53:08.517456  706399 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1218 11:53:08.517461  706399 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1218 11:53:08.517467  706399 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1218 11:53:08.517472  706399 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1218 11:53:08.517479  706399 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1218 11:53:08.517488  706399 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1218 11:53:08.517493  706399 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1218 11:53:08.517498  706399 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 11:53:08.517502  706399 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1218 11:53:08.518427  706399 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1218 11:53:08.518444  706399 docker.go:601] Images already preloaded, skipping extraction
	I1218 11:53:08.518497  706399 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 11:53:08.540045  706399 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1218 11:53:08.540071  706399 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1218 11:53:08.540079  706399 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1218 11:53:08.540103  706399 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1218 11:53:08.540112  706399 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1218 11:53:08.540125  706399 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1218 11:53:08.540143  706399 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1218 11:53:08.540151  706399 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1218 11:53:08.540160  706399 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 11:53:08.540172  706399 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1218 11:53:08.540915  706399 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1218 11:53:08.540940  706399 cache_images.go:84] Images are preloaded, skipping loading
	I1218 11:53:08.541003  706399 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1218 11:53:08.570799  706399 command_runner.go:130] > cgroupfs
	I1218 11:53:08.570938  706399 cni.go:84] Creating CNI manager for ""
	I1218 11:53:08.570956  706399 cni.go:136] 3 nodes found, recommending kindnet
	I1218 11:53:08.570983  706399 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 11:53:08.571015  706399 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.124 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-107476 NodeName:multinode-107476 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 11:53:08.571172  706399 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.124
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-107476"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.124
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.124"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 11:53:08.571284  706399 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-107476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-107476 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1218 11:53:08.571354  706399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1218 11:53:08.580283  706399 command_runner.go:130] > kubeadm
	I1218 11:53:08.580300  706399 command_runner.go:130] > kubectl
	I1218 11:53:08.580304  706399 command_runner.go:130] > kubelet
	I1218 11:53:08.580321  706399 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 11:53:08.580377  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 11:53:08.588532  706399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1218 11:53:08.604728  706399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 11:53:08.620425  706399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1218 11:53:08.636780  706399 ssh_runner.go:195] Run: grep 192.168.39.124	control-plane.minikube.internal$ /etc/hosts
	I1218 11:53:08.640548  706399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.124	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 11:53:08.652739  706399 certs.go:56] Setting up /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476 for IP: 192.168.39.124
	I1218 11:53:08.652776  706399 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1aed956519f14c4fcaee2b34a279c90e2b05a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:53:08.652956  706399 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.key
	I1218 11:53:08.653001  706399 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17824-683489/.minikube/proxy-client-ca.key
	I1218 11:53:08.653075  706399 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/client.key
	I1218 11:53:08.653122  706399 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/apiserver.key.9675f833
	I1218 11:53:08.653155  706399 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/proxy-client.key
	I1218 11:53:08.653165  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1218 11:53:08.653181  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1218 11:53:08.653193  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1218 11:53:08.653201  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1218 11:53:08.653213  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1218 11:53:08.653222  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1218 11:53:08.653233  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1218 11:53:08.653244  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1218 11:53:08.653292  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/690739.pem (1338 bytes)
	W1218 11:53:08.653316  706399 certs.go:433] ignoring /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/690739_empty.pem, impossibly tiny 0 bytes
	I1218 11:53:08.653332  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 11:53:08.653359  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem (1082 bytes)
	I1218 11:53:08.653383  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem (1123 bytes)
	I1218 11:53:08.653409  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem (1679 bytes)
	I1218 11:53:08.653448  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem (1708 bytes)
	I1218 11:53:08.653474  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem -> /usr/share/ca-certificates/6907392.pem
	I1218 11:53:08.653489  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:53:08.653501  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/690739.pem -> /usr/share/ca-certificates/690739.pem
	I1218 11:53:08.654088  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1218 11:53:08.677424  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1218 11:53:08.700082  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 11:53:08.722631  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 11:53:08.744711  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 11:53:08.766872  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1218 11:53:08.789385  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 11:53:08.812077  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 11:53:08.834610  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem --> /usr/share/ca-certificates/6907392.pem (1708 bytes)
	I1218 11:53:08.857333  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 11:53:08.879344  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/certs/690739.pem --> /usr/share/ca-certificates/690739.pem (1338 bytes)
	I1218 11:53:08.901384  706399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 11:53:08.916780  706399 ssh_runner.go:195] Run: openssl version
	I1218 11:53:08.922282  706399 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1218 11:53:08.922341  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6907392.pem && ln -fs /usr/share/ca-certificates/6907392.pem /etc/ssl/certs/6907392.pem"
	I1218 11:53:08.931642  706399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6907392.pem
	I1218 11:53:08.935749  706399 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 18 11:35 /usr/share/ca-certificates/6907392.pem
	I1218 11:53:08.935958  706399 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 18 11:35 /usr/share/ca-certificates/6907392.pem
	I1218 11:53:08.936017  706399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6907392.pem
	I1218 11:53:08.941156  706399 command_runner.go:130] > 3ec20f2e
	I1218 11:53:08.941471  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6907392.pem /etc/ssl/certs/3ec20f2e.0"
	I1218 11:53:08.950462  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 11:53:08.959471  706399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:53:08.963656  706399 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:53:08.963960  706399 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:53:08.964002  706399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:53:08.969248  706399 command_runner.go:130] > b5213941
	I1218 11:53:08.969314  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1218 11:53:08.978275  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/690739.pem && ln -fs /usr/share/ca-certificates/690739.pem /etc/ssl/certs/690739.pem"
	I1218 11:53:08.987435  706399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/690739.pem
	I1218 11:53:08.991559  706399 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 18 11:35 /usr/share/ca-certificates/690739.pem
	I1218 11:53:08.991833  706399 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 18 11:35 /usr/share/ca-certificates/690739.pem
	I1218 11:53:08.991883  706399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/690739.pem
	I1218 11:53:08.997219  706399 command_runner.go:130] > 51391683
	I1218 11:53:08.997300  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/690739.pem /etc/ssl/certs/51391683.0"
	I1218 11:53:09.007519  706399 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 11:53:09.011748  706399 command_runner.go:130] > ca.crt
	I1218 11:53:09.011764  706399 command_runner.go:130] > ca.key
	I1218 11:53:09.011769  706399 command_runner.go:130] > healthcheck-client.crt
	I1218 11:53:09.011773  706399 command_runner.go:130] > healthcheck-client.key
	I1218 11:53:09.011778  706399 command_runner.go:130] > peer.crt
	I1218 11:53:09.011782  706399 command_runner.go:130] > peer.key
	I1218 11:53:09.011786  706399 command_runner.go:130] > server.crt
	I1218 11:53:09.011793  706399 command_runner.go:130] > server.key
	I1218 11:53:09.011883  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 11:53:09.017731  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.017835  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 11:53:09.023186  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.023240  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 11:53:09.028589  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.028641  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 11:53:09.033905  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.033983  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 11:53:09.039296  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.039520  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1218 11:53:09.044713  706399 command_runner.go:130] > Certificate will not expire
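The certificate probes above use openssl's `-checkend` flag, which asks whether a certificate will still be valid N seconds from now; "Certificate will not expire" is the passing output. A minimal sketch against a throwaway self-signed certificate (the paths and subject below are illustrative, not minikube's):

```shell
# Create a short-lived self-signed certificate to probe (illustrative paths).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/checkend-demo.key -out /tmp/checkend-demo.crt -days 10 2>/dev/null

# -checkend N exits 0 and prints "Certificate will not expire" if the cert
# is still valid N seconds (here 24h) from now; otherwise it exits 1.
openssl x509 -noout -in /tmp/checkend-demo.crt -checkend 86400
```

A non-zero exit from any of these checks is what would trigger certificate regeneration instead of reuse.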
	I1218 11:53:09.044770  706399 kubeadm.go:404] StartCluster: {Name:multinode-107476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-107476 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.238 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.39 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:53:09.044901  706399 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1218 11:53:09.063644  706399 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 11:53:09.072501  706399 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1218 11:53:09.072518  706399 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1218 11:53:09.072524  706399 command_runner.go:130] > /var/lib/minikube/etcd:
	I1218 11:53:09.072529  706399 command_runner.go:130] > member
	I1218 11:53:09.072549  706399 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1218 11:53:09.072562  706399 kubeadm.go:636] restartCluster start
	I1218 11:53:09.072621  706399 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 11:53:09.080707  706399 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:09.081213  706399 kubeconfig.go:135] verify returned: extract IP: "multinode-107476" does not appear in /home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:53:09.081366  706399 kubeconfig.go:146] "multinode-107476" context is missing from /home/jenkins/minikube-integration/17824-683489/kubeconfig - will repair!
	I1218 11:53:09.081646  706399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17824-683489/kubeconfig: {Name:mkbe3b47b918311ed7d778fc321c77660f5f2482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:53:09.082090  706399 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:53:09.082328  706399 kapi.go:59] client config for multinode-107476: &rest.Config{Host:"https://192.168.39.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/client.crt", KeyFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/client.key", CAFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1ed00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 11:53:09.082929  706399 cert_rotation.go:137] Starting client certificate rotation controller
	I1218 11:53:09.083156  706399 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 11:53:09.090938  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:09.090982  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:09.101227  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:09.591919  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:09.592030  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:09.603387  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:10.091928  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:10.092030  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:10.103288  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:10.591906  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:10.592032  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:10.602954  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:11.091515  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:11.091641  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:11.103090  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:11.591669  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:11.591804  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:11.603393  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:12.092006  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:12.092105  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:12.103893  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:12.591441  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:12.591518  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:12.602651  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:13.091237  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:13.091369  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:13.103118  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:13.590973  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:13.592383  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:13.603723  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:14.091222  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:14.091346  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:14.102533  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:14.591068  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:14.591166  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:14.602318  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:15.091932  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:15.092046  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:15.103581  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:15.591099  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:15.591204  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:15.602422  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:16.091999  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:16.092095  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:16.103457  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:16.591070  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:16.591174  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:16.602679  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:17.091238  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:17.091370  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:17.103125  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:17.591667  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:17.591745  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:17.602974  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:18.091582  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:18.091718  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:18.103155  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:18.591946  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:18.592225  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:18.603460  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:19.091322  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:19.091400  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:19.102630  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:19.102658  706399 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1218 11:53:19.102668  706399 kubeadm.go:1135] stopping kube-system containers ...
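The repeated "Checking apiserver status" entries above are a poll-until-deadline loop around `pgrep -xnf`: try roughly twice a second, give up when the context deadline passes. A rough stand-alone sketch of that pattern (the target process, interval, and deadline are stand-ins, not minikube's actual code):

```shell
# Stand-in target process (plays the role of kube-apiserver here).
sleep 5 &

# Poll pgrep until the process appears or the deadline passes.
deadline=$(( $(date +%s) + 10 ))
pid=""
while [ -z "$pid" ] && [ "$(date +%s)" -lt "$deadline" ]; do
  pid=$(pgrep -xn sleep || true)   # -x: exact name match, -n: newest match
  [ -z "$pid" ] && sleep 0.5
done
echo "stand-in pid: ${pid:-not found}"
```

In the log the deadline expires with no pid ever found, which is why the next line concludes "needs reconfigure: apiserver error: context deadline exceeded".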
	I1218 11:53:19.102726  706399 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1218 11:53:19.126882  706399 command_runner.go:130] > 8a9a67bb77c4
	I1218 11:53:19.126909  706399 command_runner.go:130] > de7401b83d12
	I1218 11:53:19.126915  706399 command_runner.go:130] > fecf0ace453c
	I1218 11:53:19.126921  706399 command_runner.go:130] > a5499078bf2c
	I1218 11:53:19.126928  706399 command_runner.go:130] > f6e3111557b6
	I1218 11:53:19.126934  706399 command_runner.go:130] > 9bd0f65050dc
	I1218 11:53:19.126939  706399 command_runner.go:130] > ecad224e7387
	I1218 11:53:19.126946  706399 command_runner.go:130] > ca78bca379eb
	I1218 11:53:19.126952  706399 command_runner.go:130] > 367a10c5d07b
	I1218 11:53:19.126961  706399 command_runner.go:130] > fcaaf17b1ede
	I1218 11:53:19.126966  706399 command_runner.go:130] > 9226aa8cd1e9
	I1218 11:53:19.126975  706399 command_runner.go:130] > 4b66d146a3f4
	I1218 11:53:19.126982  706399 command_runner.go:130] > d06f419d4917
	I1218 11:53:19.126996  706399 command_runner.go:130] > 49adada57ae1
	I1218 11:53:19.127005  706399 command_runner.go:130] > 51c0e2b56511
	I1218 11:53:19.127012  706399 command_runner.go:130] > 7539f6919992
	I1218 11:53:19.127994  706399 docker.go:469] Stopping containers: [8a9a67bb77c4 de7401b83d12 fecf0ace453c a5499078bf2c f6e3111557b6 9bd0f65050dc ecad224e7387 ca78bca379eb 367a10c5d07b fcaaf17b1ede 9226aa8cd1e9 4b66d146a3f4 d06f419d4917 49adada57ae1 51c0e2b56511 7539f6919992]
	I1218 11:53:19.128071  706399 ssh_runner.go:195] Run: docker stop 8a9a67bb77c4 de7401b83d12 fecf0ace453c a5499078bf2c f6e3111557b6 9bd0f65050dc ecad224e7387 ca78bca379eb 367a10c5d07b fcaaf17b1ede 9226aa8cd1e9 4b66d146a3f4 d06f419d4917 49adada57ae1 51c0e2b56511 7539f6919992
	I1218 11:53:19.146845  706399 command_runner.go:130] > 8a9a67bb77c4
	I1218 11:53:19.146887  706399 command_runner.go:130] > de7401b83d12
	I1218 11:53:19.146894  706399 command_runner.go:130] > fecf0ace453c
	I1218 11:53:19.148422  706399 command_runner.go:130] > a5499078bf2c
	I1218 11:53:19.148444  706399 command_runner.go:130] > f6e3111557b6
	I1218 11:53:19.148709  706399 command_runner.go:130] > 9bd0f65050dc
	I1218 11:53:19.148746  706399 command_runner.go:130] > ecad224e7387
	I1218 11:53:19.150621  706399 command_runner.go:130] > ca78bca379eb
	I1218 11:53:19.150979  706399 command_runner.go:130] > 367a10c5d07b
	I1218 11:53:19.150995  706399 command_runner.go:130] > fcaaf17b1ede
	I1218 11:53:19.151009  706399 command_runner.go:130] > 9226aa8cd1e9
	I1218 11:53:19.151182  706399 command_runner.go:130] > 4b66d146a3f4
	I1218 11:53:19.151421  706399 command_runner.go:130] > d06f419d4917
	I1218 11:53:19.151682  706399 command_runner.go:130] > 49adada57ae1
	I1218 11:53:19.151693  706399 command_runner.go:130] > 51c0e2b56511
	I1218 11:53:19.151697  706399 command_runner.go:130] > 7539f6919992
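The stop step above is plain plumbing: the IDs emitted by `docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}` become the arguments of one `docker stop` invocation. With a stub in place of the docker CLI (IDs copied from the log; `echo stop` stands in for `docker stop` so this runs anywhere):

```shell
# First three IDs from the `docker ps` listing above.
ids="8a9a67bb77c4 de7401b83d12 fecf0ace453c"

# xargs batches all IDs onto a single command line; swap `echo stop`
# for `docker stop` on a real host. -r skips the call if input is empty.
printf '%s\n' $ids | xargs -r echo stop
```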
	I1218 11:53:19.152748  706399 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1218 11:53:19.167208  706399 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 11:53:19.175617  706399 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1218 11:53:19.175659  706399 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1218 11:53:19.175670  706399 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1218 11:53:19.175682  706399 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 11:53:19.175764  706399 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
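The "Process exited with status 2" above comes straight from ls: GNU ls exits 2 when a command-line argument cannot be accessed at all, and minikube treats that as "no stale kubeconfigs to clean up". A minimal reproduction (paths are illustrative):

```shell
touch /tmp/present.conf
rm -f /tmp/missing.conf

# GNU ls reports the missing operand on stderr and exits 2
# ("serious trouble"), even though the other path listed fine.
rc=0
ls -la /tmp/present.conf /tmp/missing.conf || rc=$?
echo "ls exit status: $rc"
```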
	I1218 11:53:19.175829  706399 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 11:53:19.184086  706399 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1218 11:53:19.184108  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:19.290255  706399 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 11:53:19.290616  706399 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1218 11:53:19.291271  706399 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1218 11:53:19.291767  706399 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 11:53:19.292523  706399 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1218 11:53:19.293290  706399 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1218 11:53:19.294173  706399 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1218 11:53:19.294659  706399 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1218 11:53:19.295268  706399 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1218 11:53:19.295750  706399 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 11:53:19.296399  706399 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 11:53:19.297138  706399 command_runner.go:130] > [certs] Using the existing "sa" key
	I1218 11:53:19.298557  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:19.350785  706399 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 11:53:19.458190  706399 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 11:53:19.753510  706399 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 11:53:19.917725  706399 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 11:53:20.041823  706399 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 11:53:20.044334  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:20.111720  706399 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 11:53:20.113879  706399 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 11:53:20.113900  706399 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1218 11:53:20.233250  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:20.333464  706399 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 11:53:20.333508  706399 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 11:53:20.333519  706399 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 11:53:20.333529  706399 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 11:53:20.333603  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:20.388000  706399 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 11:53:20.403526  706399 api_server.go:52] waiting for apiserver process to appear ...
	I1218 11:53:20.403632  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:20.904600  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:21.403801  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:21.904580  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:22.403835  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:22.903754  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:22.917660  706399 command_runner.go:130] > 1729
	I1218 11:53:22.922833  706399 api_server.go:72] duration metric: took 2.519305176s to wait for apiserver process to appear ...
	I1218 11:53:22.922860  706399 api_server.go:88] waiting for apiserver healthz status ...
	I1218 11:53:22.922886  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:22.923542  706399 api_server.go:269] stopped: https://192.168.39.124:8443/healthz: Get "https://192.168.39.124:8443/healthz": dial tcp 192.168.39.124:8443: connect: connection refused
	I1218 11:53:23.423182  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:25.843152  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1218 11:53:25.843187  706399 api_server.go:103] status: https://192.168.39.124:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1218 11:53:25.843205  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:25.909873  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1218 11:53:25.909925  706399 api_server.go:103] status: https://192.168.39.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1218 11:53:25.922999  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:25.929359  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1218 11:53:25.929386  706399 api_server.go:103] status: https://192.168.39.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1218 11:53:26.422960  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:26.428892  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1218 11:53:26.428928  706399 api_server.go:103] status: https://192.168.39.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1218 11:53:26.923578  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:26.931290  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1218 11:53:26.931325  706399 api_server.go:103] status: https://192.168.39.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1218 11:53:27.423966  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:27.429135  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 200:
	ok
	I1218 11:53:27.429243  706399 round_trippers.go:463] GET https://192.168.39.124:8443/version
	I1218 11:53:27.429252  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:27.429261  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:27.429267  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:27.437137  706399 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1218 11:53:27.437163  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:27.437172  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:27.437179  706399 round_trippers.go:580]     Content-Length: 264
	I1218 11:53:27.437187  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:27 GMT
	I1218 11:53:27.437194  706399 round_trippers.go:580]     Audit-Id: e12ea9f6-c15b-4448-831c-e69c87f78e83
	I1218 11:53:27.437211  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:27.437223  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:27.437234  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:27.437262  706399 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1218 11:53:27.437348  706399 api_server.go:141] control plane version: v1.28.4
	I1218 11:53:27.437371  706399 api_server.go:131] duration metric: took 4.514501797s to wait for apiserver health ...
	I1218 11:53:27.437384  706399 cni.go:84] Creating CNI manager for ""
	I1218 11:53:27.437394  706399 cni.go:136] 3 nodes found, recommending kindnet
	I1218 11:53:27.439521  706399 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1218 11:53:27.441036  706399 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1218 11:53:27.450911  706399 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1218 11:53:27.450934  706399 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1218 11:53:27.450953  706399 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1218 11:53:27.450964  706399 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 11:53:27.450981  706399 command_runner.go:130] > Access: 2023-12-18 11:52:56.552952217 +0000
	I1218 11:53:27.450993  706399 command_runner.go:130] > Modify: 2023-12-13 23:27:31.000000000 +0000
	I1218 11:53:27.451003  706399 command_runner.go:130] > Change: 2023-12-18 11:52:54.793952217 +0000
	I1218 11:53:27.451013  706399 command_runner.go:130] >  Birth: -
	I1218 11:53:27.458216  706399 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1218 11:53:27.458236  706399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1218 11:53:27.509185  706399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1218 11:53:28.905245  706399 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1218 11:53:28.912521  706399 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1218 11:53:28.916523  706399 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1218 11:53:28.934945  706399 command_runner.go:130] > daemonset.apps/kindnet configured
	I1218 11:53:28.940934  706399 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.431702924s)
	I1218 11:53:28.940965  706399 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 11:53:28.941087  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:28.941101  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:28.941113  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:28.941123  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:28.945051  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:28.945076  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:28.945086  706399 round_trippers.go:580]     Audit-Id: 6c622874-25a6-4b96-9b2e-4f49b904ff51
	I1218 11:53:28.945094  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:28.945102  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:28.945110  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:28.945118  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:28.945126  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:28 GMT
	I1218 11:53:28.946529  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"777"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84584 chars]
	I1218 11:53:28.950707  706399 system_pods.go:59] 12 kube-system pods found
	I1218 11:53:28.950736  706399 system_pods.go:61] "coredns-5dd5756b68-nl8xc" [17cd3c37-30e8-4d98-81f5-44f58135adf3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1218 11:53:28.950745  706399 system_pods.go:61] "etcd-multinode-107476" [57bcfe21-f4da-4bcf-bb4e-385b695e1e0f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1218 11:53:28.950751  706399 system_pods.go:61] "kindnet-6wlkb" [1cf338b4-8a33-4e69-aa83-3cd29b041e08] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1218 11:53:28.950756  706399 system_pods.go:61] "kindnet-8hrhv" [ef739466-48d4-4fbd-8fa5-63a41e4c6833] Running
	I1218 11:53:28.950760  706399 system_pods.go:61] "kindnet-l9h8d" [0acf0fd4-5988-4545-828c-7cb6076a5b18] Running
	I1218 11:53:28.950766  706399 system_pods.go:61] "kube-apiserver-multinode-107476" [ed1a5fb5-539a-4a7d-9977-42e1392858fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1218 11:53:28.950775  706399 system_pods.go:61] "kube-controller-manager-multinode-107476" [9b1fc3f6-07ef-4577-9135-a1c4844e5555] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1218 11:53:28.950782  706399 system_pods.go:61] "kube-proxy-9xwh7" [d1b02596-ab29-4f7a-8118-bd091eef9e44] Running
	I1218 11:53:28.950792  706399 system_pods.go:61] "kube-proxy-ff4bs" [a5e9af15-7c15-4de8-8be0-1b8e7289125f] Running
	I1218 11:53:28.950800  706399 system_pods.go:61] "kube-proxy-jf8kx" [060b1020-573b-4b35-9a0b-e04f37535267] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1218 11:53:28.950809  706399 system_pods.go:61] "kube-scheduler-multinode-107476" [08f65d94-d942-4ae5-a937-e3efff4b51dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1218 11:53:28.950824  706399 system_pods.go:61] "storage-provisioner" [e04ec19d-39a8-4849-b604-8e46b7f9cea3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1218 11:53:28.950832  706399 system_pods.go:74] duration metric: took 9.862056ms to wait for pod list to return data ...
	I1218 11:53:28.950839  706399 node_conditions.go:102] verifying NodePressure condition ...
	I1218 11:53:28.950909  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes
	I1218 11:53:28.950918  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:28.950925  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:28.950931  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:28.953444  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:28.953475  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:28.953487  706399 round_trippers.go:580]     Audit-Id: 0d66de6b-1b8d-4012-9156-1fa20bb81935
	I1218 11:53:28.953495  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:28.953501  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:28.953508  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:28.953513  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:28.953519  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:28 GMT
	I1218 11:53:28.953797  706399 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"777"},"items":[{"metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14775 chars]
	I1218 11:53:28.954628  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:28.954655  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:28.954667  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:28.954671  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:28.954677  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:28.954684  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:28.954690  706399 node_conditions.go:105] duration metric: took 3.843221ms to run NodePressure ...
	I1218 11:53:28.954714  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:29.198463  706399 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1218 11:53:29.198489  706399 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1218 11:53:29.198613  706399 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1218 11:53:29.198764  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1218 11:53:29.198778  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.198790  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.198807  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.202177  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:29.202201  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.202208  706399 round_trippers.go:580]     Audit-Id: 19d0d8d5-e9c5-4d32-b655-9ad8a4c44da9
	I1218 11:53:29.202213  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.202218  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.202223  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.202228  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.202233  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.203368  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"779"},"items":[{"metadata":{"name":"etcd-multinode-107476","namespace":"kube-system","uid":"57bcfe21-f4da-4bcf-bb4e-385b695e1e0f","resourceVersion":"767","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.124:2379","kubernetes.io/config.hash":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.mirror":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.seen":"2023-12-18T11:49:16.607301032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29788 chars]
	I1218 11:53:29.204464  706399 kubeadm.go:787] kubelet initialised
	I1218 11:53:29.204488  706399 kubeadm.go:788] duration metric: took 5.842944ms waiting for restarted kubelet to initialise ...
	I1218 11:53:29.204498  706399 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 11:53:29.204573  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:29.204584  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.204595  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.204613  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.208130  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:29.208151  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.208159  706399 round_trippers.go:580]     Audit-Id: 450b4722-b778-4d0a-aede-ee77ca9c229c
	I1218 11:53:29.208165  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.208171  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.208176  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.208181  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.208208  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.209329  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"779"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84584 chars]
	I1218 11:53:29.211875  706399 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.211970  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:29.211980  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.211991  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.212001  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.214577  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.214596  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.214603  706399 round_trippers.go:580]     Audit-Id: 9385acf6-1b01-4b3d-928c-439fe28d4f97
	I1218 11:53:29.214608  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.214613  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.214618  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.214623  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.214627  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.215229  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:29.215743  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:29.215765  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.215776  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.215783  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.217921  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.217938  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.217944  706399 round_trippers.go:580]     Audit-Id: 8a8697ed-9283-4bdf-9239-28520f9f9b9f
	I1218 11:53:29.217950  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.217958  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.217968  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.217977  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.217988  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.218120  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:29.218457  706399 pod_ready.go:97] node "multinode-107476" hosting pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.218478  706399 pod_ready.go:81] duration metric: took 6.581675ms waiting for pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:29.218492  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.218502  706399 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.218551  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107476
	I1218 11:53:29.218558  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.218572  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.218585  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.220388  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:29.220404  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.220410  706399 round_trippers.go:580]     Audit-Id: 6774ec4b-7426-4031-ac00-5f3c00310f09
	I1218 11:53:29.220415  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.220420  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.220426  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.220433  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.220442  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.220551  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107476","namespace":"kube-system","uid":"57bcfe21-f4da-4bcf-bb4e-385b695e1e0f","resourceVersion":"767","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.124:2379","kubernetes.io/config.hash":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.mirror":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.seen":"2023-12-18T11:49:16.607301032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6305 chars]
	I1218 11:53:29.220938  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:29.220954  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.220961  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.220967  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.222861  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:29.222877  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.222886  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.222897  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.222905  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.222913  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.222925  706399 round_trippers.go:580]     Audit-Id: cdc058a1-0407-4522-ad4e-1bccaa86b8e0
	I1218 11:53:29.222934  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.223090  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:29.223369  706399 pod_ready.go:97] node "multinode-107476" hosting pod "etcd-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.223386  706399 pod_ready.go:81] duration metric: took 4.874816ms waiting for pod "etcd-multinode-107476" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:29.223394  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "etcd-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.223412  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.223472  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-107476
	I1218 11:53:29.223479  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.223486  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.223496  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.225396  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:29.225413  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.225419  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.225425  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.225430  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.225435  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.225442  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.225451  706399 round_trippers.go:580]     Audit-Id: 2464f96a-0515-46f9-8313-633c8eafb3b2
	I1218 11:53:29.225634  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-107476","namespace":"kube-system","uid":"ed1a5fb5-539a-4a7d-9977-42e1392858fb","resourceVersion":"768","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.124:8443","kubernetes.io/config.hash":"d249aa06177557dc7c27cc4c9fd3f8c4","kubernetes.io/config.mirror":"d249aa06177557dc7c27cc4c9fd3f8c4","kubernetes.io/config.seen":"2023-12-18T11:49:16.607305528Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7859 chars]
	I1218 11:53:29.225978  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:29.225994  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.226001  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.226006  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.227849  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:29.227867  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.227876  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.227884  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.227892  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.227900  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.227909  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.227916  706399 round_trippers.go:580]     Audit-Id: 8723f00d-f528-46cc-b34b-878c1dbe29bf
	I1218 11:53:29.228105  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:29.228354  706399 pod_ready.go:97] node "multinode-107476" hosting pod "kube-apiserver-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.228374  706399 pod_ready.go:81] duration metric: took 4.951319ms waiting for pod "kube-apiserver-multinode-107476" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:29.228382  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "kube-apiserver-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.228387  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.228468  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-107476
	I1218 11:53:29.228478  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.228484  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.228490  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.234141  706399 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 11:53:29.234160  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.234169  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.234176  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.234190  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.234195  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.234201  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.234205  706399 round_trippers.go:580]     Audit-Id: e7a7e09c-4d05-4a64-917b-5e55b2c17b60
	I1218 11:53:29.234474  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-107476","namespace":"kube-system","uid":"9b1fc3f6-07ef-4577-9135-a1c4844e5555","resourceVersion":"769","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"00c351f167ca4a8342aa8125cafbf1ad","kubernetes.io/config.mirror":"00c351f167ca4a8342aa8125cafbf1ad","kubernetes.io/config.seen":"2023-12-18T11:49:16.607306981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7440 chars]
	I1218 11:53:29.342153  706399 request.go:629] Waited for 107.293593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:29.342245  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:29.342251  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.342259  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.342265  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.345014  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.345032  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.345039  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.345044  706399 round_trippers.go:580]     Audit-Id: 931285de-8f53-4e79-b792-460f413e4aff
	I1218 11:53:29.345049  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.345054  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.345059  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.345068  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.345238  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:29.345553  706399 pod_ready.go:97] node "multinode-107476" hosting pod "kube-controller-manager-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.345573  706399 pod_ready.go:81] duration metric: took 117.178912ms waiting for pod "kube-controller-manager-multinode-107476" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:29.345582  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "kube-controller-manager-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.345593  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9xwh7" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.542039  706399 request.go:629] Waited for 196.361004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xwh7
	I1218 11:53:29.542142  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xwh7
	I1218 11:53:29.542147  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.542156  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.542162  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.544982  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.545002  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.545009  706399 round_trippers.go:580]     Audit-Id: e1d63858-4541-4ccd-a4da-08fd054a97e6
	I1218 11:53:29.545017  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.545025  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.545033  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.545042  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.545058  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.545244  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9xwh7","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1b02596-ab29-4f7a-8118-bd091eef9e44","resourceVersion":"520","creationTimestamp":"2023-12-18T11:50:18Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:50:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I1218 11:53:29.741997  706399 request.go:629] Waited for 196.344122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m02
	I1218 11:53:29.742076  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m02
	I1218 11:53:29.742082  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.742093  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.742117  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.744705  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.744733  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.744743  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.744751  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.744759  706399 round_trippers.go:580]     Audit-Id: eb4d544e-890a-4cf6-8b49-17e1c66fedd1
	I1218 11:53:29.744766  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.744775  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.744785  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.744985  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476-m02","uid":"aac92642-4fcf-4fbe-89f6-b1c274d602fe","resourceVersion":"737","creationTimestamp":"2023-12-18T11:50:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T11_52_06_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:50:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3819 chars]
	I1218 11:53:29.745330  706399 pod_ready.go:92] pod "kube-proxy-9xwh7" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:29.745356  706399 pod_ready.go:81] duration metric: took 399.751355ms waiting for pod "kube-proxy-9xwh7" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.745369  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ff4bs" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.941544  706399 request.go:629] Waited for 196.09241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff4bs
	I1218 11:53:29.941631  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff4bs
	I1218 11:53:29.941639  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.941653  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.941664  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.944619  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.944641  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.944649  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.944654  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.944659  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.944665  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.944670  706399 round_trippers.go:580]     Audit-Id: 53295520-6dfc-40b0-aa42-f14c320fd991
	I1218 11:53:29.944675  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.945395  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ff4bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"a5e9af15-7c15-4de8-8be0-1b8e7289125f","resourceVersion":"746","creationTimestamp":"2023-12-18T11:51:17Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:51:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I1218 11:53:30.141176  706399 request.go:629] Waited for 195.305381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m03
	I1218 11:53:30.141251  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m03
	I1218 11:53:30.141277  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.141288  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.141294  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.144266  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:30.144293  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.144304  706399 round_trippers.go:580]     Audit-Id: db750d14-63e9-423b-9181-601ba7e56368
	I1218 11:53:30.144313  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.144321  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.144328  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.144335  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.144342  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.144508  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476-m03","uid":"18274b06-f1b8-4878-9e6b-e3745fba73a7","resourceVersion":"759","creationTimestamp":"2023-12-18T11:52:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T11_52_06_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:52:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3635 chars]
	I1218 11:53:30.144910  706399 pod_ready.go:92] pod "kube-proxy-ff4bs" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:30.144938  706399 pod_ready.go:81] duration metric: took 399.556805ms waiting for pod "kube-proxy-ff4bs" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:30.144951  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jf8kx" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:30.341891  706399 request.go:629] Waited for 196.832639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jf8kx
	I1218 11:53:30.341974  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jf8kx
	I1218 11:53:30.341981  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.341989  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.341996  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.344936  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:30.344960  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.344969  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.344976  706399 round_trippers.go:580]     Audit-Id: 6d7e686b-0932-465f-b25e-09aeb30d81ad
	I1218 11:53:30.344983  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.344990  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.344998  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.345005  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.345247  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jf8kx","generateName":"kube-proxy-","namespace":"kube-system","uid":"060b1020-573b-4b35-9a0b-e04f37535267","resourceVersion":"772","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5932 chars]
	I1218 11:53:30.542107  706399 request.go:629] Waited for 196.385627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:30.542172  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:30.542176  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.542202  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.542210  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.545091  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:30.545113  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.545121  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.545130  706399 round_trippers.go:580]     Audit-Id: c21950d0-952e-42f1-995c-f068b90f04c0
	I1218 11:53:30.545138  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.545145  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.545153  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.545164  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.545578  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:30.545899  706399 pod_ready.go:97] node "multinode-107476" hosting pod "kube-proxy-jf8kx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:30.545916  706399 pod_ready.go:81] duration metric: took 400.958711ms waiting for pod "kube-proxy-jf8kx" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:30.545925  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "kube-proxy-jf8kx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:30.545935  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:30.741971  706399 request.go:629] Waited for 195.944564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107476
	I1218 11:53:30.742047  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107476
	I1218 11:53:30.742052  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.742062  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.742069  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.745047  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:30.745075  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.745084  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.745092  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.745105  706399 round_trippers.go:580]     Audit-Id: 588c2353-9d7d-488b-a950-87bf03ba3da0
	I1218 11:53:30.745115  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.745122  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.745130  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.745381  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-107476","namespace":"kube-system","uid":"08f65d94-d942-4ae5-a937-e3efff4b51dd","resourceVersion":"770","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"47de9e5e3d9b879716556f063f68cd22","kubernetes.io/config.mirror":"47de9e5e3d9b879716556f063f68cd22","kubernetes.io/config.seen":"2023-12-18T11:49:16.607308314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5152 chars]
	I1218 11:53:30.941089  706399 request.go:629] Waited for 195.312312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:30.941185  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:30.941199  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.941210  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.941216  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.944408  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:30.944434  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.944445  706399 round_trippers.go:580]     Audit-Id: 7a4702e0-308a-4d75-b115-eb14716b6830
	I1218 11:53:30.944453  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.944462  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.944474  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.944486  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.944497  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.944675  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:30.945060  706399 pod_ready.go:97] node "multinode-107476" hosting pod "kube-scheduler-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:30.945088  706399 pod_ready.go:81] duration metric: took 399.145466ms waiting for pod "kube-scheduler-multinode-107476" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:30.945102  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "kube-scheduler-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:30.945113  706399 pod_ready.go:38] duration metric: took 1.740603836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 11:53:30.945134  706399 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 11:53:30.975551  706399 command_runner.go:130] > -16
	I1218 11:53:30.975760  706399 ops.go:34] apiserver oom_adj: -16
	I1218 11:53:30.975788  706399 kubeadm.go:640] restartCluster took 21.903211868s
	I1218 11:53:30.975799  706399 kubeadm.go:406] StartCluster complete in 21.931036061s
	I1218 11:53:30.975823  706399 settings.go:142] acquiring lock: {Name:mk1b55e0e8c256c6bc60d3bea159645d01ed78f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:53:30.975910  706399 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:53:30.976662  706399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17824-683489/kubeconfig: {Name:mkbe3b47b918311ed7d778fc321c77660f5f2482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:53:30.976915  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 11:53:30.976953  706399 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1218 11:53:30.980045  706399 out.go:177] * Enabled addons: 
	I1218 11:53:30.977197  706399 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:53:30.977270  706399 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:53:30.981684  706399 addons.go:502] enable addons completed in 4.7055ms: enabled=[]
	I1218 11:53:30.982005  706399 kapi.go:59] client config for multinode-107476: &rest.Config{Host:"https://192.168.39.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/client.crt", KeyFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/client.key", CAFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1ed00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 11:53:30.982452  706399 round_trippers.go:463] GET https://192.168.39.124:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1218 11:53:30.982466  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.982478  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.982487  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.985560  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:30.985590  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.985598  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.985604  706399 round_trippers.go:580]     Audit-Id: 733c0867-ba1a-4681-b566-8abcfe50d689
	I1218 11:53:30.985613  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.985627  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.985638  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.985644  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.985652  706399 round_trippers.go:580]     Content-Length: 291
	I1218 11:53:30.985680  706399 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3f9d4717-a78b-4c7e-9f95-6ab3b5581a7f","resourceVersion":"778","creationTimestamp":"2023-12-18T11:49:16Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1218 11:53:30.985863  706399 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-107476" context rescaled to 1 replicas
	I1218 11:53:30.985895  706399 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1218 11:53:30.987695  706399 out.go:177] * Verifying Kubernetes components...
	I1218 11:53:30.989853  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 11:53:31.158210  706399 command_runner.go:130] > apiVersion: v1
	I1218 11:53:31.158232  706399 command_runner.go:130] > data:
	I1218 11:53:31.158237  706399 command_runner.go:130] >   Corefile: |
	I1218 11:53:31.158243  706399 command_runner.go:130] >     .:53 {
	I1218 11:53:31.158250  706399 command_runner.go:130] >         log
	I1218 11:53:31.158263  706399 command_runner.go:130] >         errors
	I1218 11:53:31.158271  706399 command_runner.go:130] >         health {
	I1218 11:53:31.158287  706399 command_runner.go:130] >            lameduck 5s
	I1218 11:53:31.158292  706399 command_runner.go:130] >         }
	I1218 11:53:31.158300  706399 command_runner.go:130] >         ready
	I1218 11:53:31.158309  706399 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1218 11:53:31.158313  706399 command_runner.go:130] >            pods insecure
	I1218 11:53:31.158325  706399 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1218 11:53:31.158335  706399 command_runner.go:130] >            ttl 30
	I1218 11:53:31.158342  706399 command_runner.go:130] >         }
	I1218 11:53:31.158352  706399 command_runner.go:130] >         prometheus :9153
	I1218 11:53:31.158360  706399 command_runner.go:130] >         hosts {
	I1218 11:53:31.158374  706399 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1218 11:53:31.158384  706399 command_runner.go:130] >            fallthrough
	I1218 11:53:31.158390  706399 command_runner.go:130] >         }
	I1218 11:53:31.158397  706399 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1218 11:53:31.158404  706399 command_runner.go:130] >            max_concurrent 1000
	I1218 11:53:31.158411  706399 command_runner.go:130] >         }
	I1218 11:53:31.158418  706399 command_runner.go:130] >         cache 30
	I1218 11:53:31.158434  706399 command_runner.go:130] >         loop
	I1218 11:53:31.158444  706399 command_runner.go:130] >         reload
	I1218 11:53:31.158453  706399 command_runner.go:130] >         loadbalance
	I1218 11:53:31.158462  706399 command_runner.go:130] >     }
	I1218 11:53:31.158472  706399 command_runner.go:130] > kind: ConfigMap
	I1218 11:53:31.158481  706399 command_runner.go:130] > metadata:
	I1218 11:53:31.158488  706399 command_runner.go:130] >   creationTimestamp: "2023-12-18T11:49:16Z"
	I1218 11:53:31.158492  706399 command_runner.go:130] >   name: coredns
	I1218 11:53:31.158498  706399 command_runner.go:130] >   namespace: kube-system
	I1218 11:53:31.158509  706399 command_runner.go:130] >   resourceVersion: "396"
	I1218 11:53:31.158517  706399 command_runner.go:130] >   uid: 9e09d417-7d67-4099-aeea-880a5f122cec
	I1218 11:53:31.161286  706399 node_ready.go:35] waiting up to 6m0s for node "multinode-107476" to be "Ready" ...
	I1218 11:53:31.161454  706399 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1218 11:53:31.161506  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:31.161526  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:31.161538  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:31.161551  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:31.164076  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:31.164092  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:31.164099  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:31.164104  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:31.164109  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:31 GMT
	I1218 11:53:31.164114  706399 round_trippers.go:580]     Audit-Id: a1e2309c-5203-41c1-bdff-38bf4aa1b0e4
	I1218 11:53:31.164119  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:31.164124  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:31.164299  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:31.661958  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:31.661994  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:31.662005  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:31.662014  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:31.665299  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:31.665326  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:31.665337  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:31.665345  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:31 GMT
	I1218 11:53:31.665354  706399 round_trippers.go:580]     Audit-Id: 715c021b-232b-46db-b224-0ee0e1d87bd0
	I1218 11:53:31.665364  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:31.665372  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:31.665383  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:31.665557  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:32.162261  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:32.162294  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:32.162318  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:32.162328  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:32.165415  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:32.165445  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:32.165456  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:32.165465  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:32.165473  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:32 GMT
	I1218 11:53:32.165480  706399 round_trippers.go:580]     Audit-Id: 8178c82f-f5df-4946-829a-8d607bef70f1
	I1218 11:53:32.165487  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:32.165494  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:32.165662  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:32.662421  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:32.662459  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:32.662472  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:32.662482  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:32.665000  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:32.665024  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:32.665031  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:32.665036  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:32.665044  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:32.665050  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:32.665055  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:32 GMT
	I1218 11:53:32.665063  706399 round_trippers.go:580]     Audit-Id: acbd11c7-43ce-4b9c-970b-6cfe7595d19b
	I1218 11:53:32.665272  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:33.161915  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:33.161951  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:33.161964  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:33.161973  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:33.164679  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:33.164707  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:33.164718  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:33.164727  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:33.164734  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:33 GMT
	I1218 11:53:33.164742  706399 round_trippers.go:580]     Audit-Id: 803dab92-ad10-4d1e-9c2c-02e13845c977
	I1218 11:53:33.164754  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:33.164761  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:33.164950  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:33.165364  706399 node_ready.go:58] node "multinode-107476" has status "Ready":"False"
	I1218 11:53:33.661704  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:33.661729  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:33.661737  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:33.661743  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:33.664502  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:33.664528  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:33.664537  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:33 GMT
	I1218 11:53:33.664542  706399 round_trippers.go:580]     Audit-Id: 78f37d85-d255-498a-97ae-7e7ffea71734
	I1218 11:53:33.664547  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:33.664552  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:33.664558  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:33.664563  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:33.664871  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:33.665288  706399 node_ready.go:49] node "multinode-107476" has status "Ready":"True"
	I1218 11:53:33.665314  706399 node_ready.go:38] duration metric: took 2.503992718s waiting for node "multinode-107476" to be "Ready" ...
	I1218 11:53:33.665324  706399 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 11:53:33.665384  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:33.665393  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:33.665400  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:33.665406  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:33.668975  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:33.668992  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:33.668998  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:33 GMT
	I1218 11:53:33.669004  706399 round_trippers.go:580]     Audit-Id: 616ca18a-8e53-464b-b8f7-fdc3a26f56e2
	I1218 11:53:33.669011  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:33.669016  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:33.669021  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:33.669026  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:33.670356  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"852"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83732 chars]
	I1218 11:53:33.672899  706399 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:33.672977  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:33.672986  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:33.672993  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:33.672999  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:33.675712  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:33.675728  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:33.675743  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:33 GMT
	I1218 11:53:33.675748  706399 round_trippers.go:580]     Audit-Id: eabe770a-6bc4-4dfc-b039-991ddbcade34
	I1218 11:53:33.675755  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:33.675760  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:33.675765  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:33.675771  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:33.676383  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:33.676975  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:33.676993  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:33.677001  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:33.677007  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:33.678858  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:33.678876  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:33.678885  706399 round_trippers.go:580]     Audit-Id: f39381a7-3505-48c5-8706-62a66b7c6d74
	I1218 11:53:33.678898  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:33.678907  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:33.678913  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:33.678918  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:33.678926  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:33 GMT
	I1218 11:53:33.679219  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:34.173545  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:34.173574  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:34.173582  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:34.173588  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:34.177792  706399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 11:53:34.177814  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:34.177821  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:34.177827  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:34.177832  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:34.177837  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:34 GMT
	I1218 11:53:34.177842  706399 round_trippers.go:580]     Audit-Id: e0c08780-2ccb-4466-ac60-0130be0e91bb
	I1218 11:53:34.177847  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:34.178197  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:34.178858  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:34.178877  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:34.178888  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:34.178898  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:34.182714  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:34.182734  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:34.182741  706399 round_trippers.go:580]     Audit-Id: 8aad3fb3-c28c-4741-bb51-1b599fc4d9a2
	I1218 11:53:34.182746  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:34.182751  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:34.182756  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:34.182761  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:34.182766  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:34 GMT
	I1218 11:53:34.183249  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:34.674054  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:34.674087  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:34.674102  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:34.674111  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:34.677143  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:34.677168  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:34.677175  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:34.677181  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:34.677191  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:34.677196  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:34 GMT
	I1218 11:53:34.677201  706399 round_trippers.go:580]     Audit-Id: 705cd8df-0cf3-47cc-9898-d4f3cbf27fc1
	I1218 11:53:34.677206  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:34.677480  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:34.677955  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:34.677969  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:34.677977  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:34.677983  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:34.680928  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:34.680951  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:34.680961  706399 round_trippers.go:580]     Audit-Id: 79e80102-7689-456c-968e-8b545873dcf0
	I1218 11:53:34.680969  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:34.680979  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:34.680992  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:34.681003  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:34.681011  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:34 GMT
	I1218 11:53:34.681532  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:35.173215  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:35.173248  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:35.173257  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:35.173309  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:35.176153  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:35.176175  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:35.176183  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:35.176190  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:35.176199  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:35 GMT
	I1218 11:53:35.176207  706399 round_trippers.go:580]     Audit-Id: 6b84f91f-f0e3-431d-b790-7a72f221660b
	I1218 11:53:35.176218  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:35.176227  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:35.176689  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:35.177270  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:35.177287  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:35.177295  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:35.177303  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:35.179670  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:35.179698  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:35.179705  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:35.179712  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:35.179720  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:35.179728  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:35 GMT
	I1218 11:53:35.179735  706399 round_trippers.go:580]     Audit-Id: 4ebee61b-cc3b-47df-a387-697134152b33
	I1218 11:53:35.179744  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:35.179923  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:35.673560  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:35.673590  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:35.673599  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:35.673605  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:35.676855  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:35.676885  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:35.676895  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:35.676903  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:35.676910  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:35 GMT
	I1218 11:53:35.676917  706399 round_trippers.go:580]     Audit-Id: 29332715-2ca7-46d2-9eae-60bcc11a611d
	I1218 11:53:35.676923  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:35.676931  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:35.677062  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:35.677571  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:35.677588  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:35.677599  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:35.677610  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:35.680478  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:35.680509  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:35.680519  706399 round_trippers.go:580]     Audit-Id: cf7fec49-5746-4ad8-ad95-44ddd5a46a7c
	I1218 11:53:35.680528  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:35.680537  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:35.680545  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:35.680552  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:35.680560  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:35 GMT
	I1218 11:53:35.680765  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:35.681145  706399 pod_ready.go:102] pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace has status "Ready":"False"
	I1218 11:53:36.173403  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:36.173429  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:36.173440  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:36.173448  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:36.179050  706399 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 11:53:36.179081  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:36.179092  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:36.179127  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:36.179141  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:36.179149  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:36 GMT
	I1218 11:53:36.179161  706399 round_trippers.go:580]     Audit-Id: 2262febf-a9c1-4185-a064-37f0e57229fd
	I1218 11:53:36.179173  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:36.179914  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:36.180600  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:36.180626  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:36.180638  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:36.180648  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:36.182832  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:36.182851  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:36.182859  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:36.182867  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:36.182874  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:36.182881  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:36 GMT
	I1218 11:53:36.182890  706399 round_trippers.go:580]     Audit-Id: 7672883b-ce34-4c88-940d-e431e9489d5d
	I1218 11:53:36.182900  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:36.183021  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:36.673765  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:36.673797  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:36.673809  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:36.673816  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:36.676897  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:36.676920  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:36.676941  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:36.676948  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:36.676956  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:36.676963  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:36.676971  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:36 GMT
	I1218 11:53:36.676985  706399 round_trippers.go:580]     Audit-Id: 4ce118c9-c9cf-42f4-ad28-24e77a8f8d0b
	I1218 11:53:36.677587  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:36.678050  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:36.678064  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:36.678073  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:36.678079  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:36.680488  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:36.680504  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:36.680513  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:36.680520  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:36.680528  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:36.680542  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:36.680558  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:36 GMT
	I1218 11:53:36.680567  706399 round_trippers.go:580]     Audit-Id: 329a463d-e9eb-4a48-941f-81cfd668cb20
	I1218 11:53:36.680745  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:37.173387  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:37.173415  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:37.173423  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:37.173430  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:37.176760  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:37.176789  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:37.176799  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:37.176807  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:37.176814  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:37 GMT
	I1218 11:53:37.176822  706399 round_trippers.go:580]     Audit-Id: ccd32a2b-22a0-4e80-891a-798ae2e74751
	I1218 11:53:37.176830  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:37.176841  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:37.177566  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:37.178053  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:37.178066  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:37.178074  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:37.178080  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:37.180584  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:37.180606  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:37.180616  706399 round_trippers.go:580]     Audit-Id: 6b24dd36-f6bb-4b1e-bc13-bfdc9fcb3deb
	I1218 11:53:37.180624  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:37.180634  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:37.180644  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:37.180660  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:37.180673  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:37 GMT
	I1218 11:53:37.181042  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:37.673822  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:37.673855  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:37.673864  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:37.673870  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:37.676905  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:37.676930  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:37.676937  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:37.676943  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:37.676948  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:37.676953  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:37 GMT
	I1218 11:53:37.676958  706399 round_trippers.go:580]     Audit-Id: c843db6c-febe-472d-9c6d-2c60ae326f9c
	I1218 11:53:37.676963  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:37.677455  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:37.677995  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:37.678010  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:37.678018  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:37.678024  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:37.680442  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:37.680462  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:37.680471  706399 round_trippers.go:580]     Audit-Id: 03d35d8e-3248-4e4a-aaa4-561ea5506445
	I1218 11:53:37.680479  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:37.680486  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:37.680494  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:37.680506  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:37.680514  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:37 GMT
	I1218 11:53:37.680764  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:38.173464  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:38.173495  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:38.173504  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:38.173510  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:38.177182  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:38.177207  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:38.177217  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:38.177225  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:38 GMT
	I1218 11:53:38.177231  706399 round_trippers.go:580]     Audit-Id: 8f5f4bf7-c666-4e33-9c29-fb899337e95e
	I1218 11:53:38.177238  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:38.177245  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:38.177252  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:38.177919  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:38.178418  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:38.178436  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:38.178444  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:38.178449  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:38.181432  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:38.181453  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:38.181463  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:38 GMT
	I1218 11:53:38.181472  706399 round_trippers.go:580]     Audit-Id: cdef1a0d-7934-417d-b867-e54c5da5c288
	I1218 11:53:38.181480  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:38.181488  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:38.181497  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:38.181506  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:38.182567  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:38.182937  706399 pod_ready.go:102] pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace has status "Ready":"False"
	I1218 11:53:38.673981  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:38.674003  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:38.674014  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:38.674021  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:38.676858  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:38.676938  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:38.676957  706399 round_trippers.go:580]     Audit-Id: 2ef42698-8375-41e0-83e7-e39f4386e551
	I1218 11:53:38.676967  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:38.676976  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:38.676982  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:38.676987  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:38.676995  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:38 GMT
	I1218 11:53:38.677194  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:38.677739  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:38.677756  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:38.677766  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:38.677775  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:38.680079  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:38.680104  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:38.680114  706399 round_trippers.go:580]     Audit-Id: cb04d605-5990-411c-bb61-d27a16eb40e0
	I1218 11:53:38.680122  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:38.680127  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:38.680132  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:38.680137  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:38.680142  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:38 GMT
	I1218 11:53:38.680303  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:39.173689  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:39.173724  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:39.173735  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:39.173743  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:39.176928  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:39.176956  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:39.176966  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:39.176974  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:39.176991  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:39.176998  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:39 GMT
	I1218 11:53:39.177009  706399 round_trippers.go:580]     Audit-Id: 53d7f113-e0ab-4396-97c8-fac771a70baa
	I1218 11:53:39.177017  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:39.177158  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:39.177635  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:39.177666  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:39.177677  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:39.177687  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:39.180115  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:39.180141  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:39.180152  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:39.180160  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:39.180166  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:39.180174  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:39 GMT
	I1218 11:53:39.180179  706399 round_trippers.go:580]     Audit-Id: 90d34db6-74ca-42f5-81d9-8222532758aa
	I1218 11:53:39.180196  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:39.180432  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:39.674135  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:39.674165  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:39.674176  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:39.674185  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:39.676939  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:39.676965  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:39.676974  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:39.676990  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:39.676995  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:39 GMT
	I1218 11:53:39.677000  706399 round_trippers.go:580]     Audit-Id: 501a9bb0-a4f9-46a1-b970-b27f1660227c
	I1218 11:53:39.677005  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:39.677011  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:39.677211  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:39.677746  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:39.677765  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:39.677776  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:39.677784  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:39.680008  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:39.680025  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:39.680032  706399 round_trippers.go:580]     Audit-Id: 649faefe-95f3-4ba5-944c-2b3ac4a04840
	I1218 11:53:39.680037  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:39.680042  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:39.680047  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:39.680059  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:39.680064  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:39 GMT
	I1218 11:53:39.680517  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:40.173280  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:40.173318  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:40.173330  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:40.173338  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:40.176226  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:40.176252  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:40.176260  706399 round_trippers.go:580]     Audit-Id: dcaceef1-cb4c-409d-9795-82135569a3f0
	I1218 11:53:40.176265  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:40.176271  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:40.176276  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:40.176281  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:40.176286  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:40 GMT
	I1218 11:53:40.176500  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:40.177135  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:40.177154  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:40.177166  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:40.177173  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:40.179445  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:40.179459  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:40.179466  706399 round_trippers.go:580]     Audit-Id: 9033ba3c-1dd2-4b09-8d85-34017bc0e26d
	I1218 11:53:40.179471  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:40.179476  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:40.179480  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:40.179486  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:40.179491  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:40 GMT
	I1218 11:53:40.179900  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:40.673585  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:40.673616  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:40.673624  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:40.673630  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:40.676460  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:40.676486  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:40.676496  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:40.676505  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:40 GMT
	I1218 11:53:40.676513  706399 round_trippers.go:580]     Audit-Id: 0ac8a2dd-4ed5-431e-9228-2726aad2faf3
	I1218 11:53:40.676522  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:40.676532  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:40.676542  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:40.676681  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:40.677282  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:40.677299  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:40.677309  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:40.677322  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:40.679390  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:40.679405  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:40.679411  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:40.679417  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:40.679422  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:40.679426  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:40.679431  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:40 GMT
	I1218 11:53:40.679437  706399 round_trippers.go:580]     Audit-Id: 3088a618-2697-41b5-b81f-673ab861df2d
	I1218 11:53:40.679674  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:40.680074  706399 pod_ready.go:102] pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace has status "Ready":"False"
	I1218 11:53:41.173403  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:41.173429  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:41.173438  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:41.173443  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:41.176266  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:41.176289  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:41.176300  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:41.176315  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:41.176322  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:41.176336  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:41 GMT
	I1218 11:53:41.176349  706399 round_trippers.go:580]     Audit-Id: 53e21024-a9d5-4eca-a522-b1244059f300
	I1218 11:53:41.176356  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:41.177028  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:41.177537  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:41.177553  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:41.177561  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:41.177570  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:41.179482  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:41.179501  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:41.179523  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:41.179532  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:41.179542  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:41.179552  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:41.179564  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:41 GMT
	I1218 11:53:41.179575  706399 round_trippers.go:580]     Audit-Id: 237d0bff-9402-489c-822c-431b43baeb0c
	I1218 11:53:41.179806  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:41.673434  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:41.673463  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:41.673475  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:41.673481  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:41.676679  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:41.676701  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:41.676709  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:41.676715  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:41 GMT
	I1218 11:53:41.676720  706399 round_trippers.go:580]     Audit-Id: 5e3f13c1-c640-47db-98ab-31b91f950abc
	I1218 11:53:41.676725  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:41.676731  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:41.676736  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:41.677002  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:41.677473  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:41.677493  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:41.677504  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:41.677512  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:41.679823  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:41.679840  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:41.679847  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:41.679852  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:41.679857  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:41.679862  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:41 GMT
	I1218 11:53:41.679867  706399 round_trippers.go:580]     Audit-Id: c58e98cd-5718-47f9-b671-de3e227e7f8a
	I1218 11:53:41.679880  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:41.680038  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:42.173754  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:42.173792  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:42.173801  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:42.173807  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:42.176269  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:42.176291  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:42.176307  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:42 GMT
	I1218 11:53:42.176315  706399 round_trippers.go:580]     Audit-Id: f068f51e-93e6-4b4b-8a24-382d1325b363
	I1218 11:53:42.176324  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:42.176333  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:42.176343  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:42.176352  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:42.176513  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:42.176990  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:42.177006  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:42.177016  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:42.177025  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:42.179154  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:42.179173  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:42.179184  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:42.179193  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:42.179200  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:42 GMT
	I1218 11:53:42.179208  706399 round_trippers.go:580]     Audit-Id: 6bd1c020-71a6-4a7c-b496-e507683b71a1
	I1218 11:53:42.179214  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:42.179219  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:42.179368  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:42.674178  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:42.674211  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:42.674219  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:42.674225  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:42.676989  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:42.677019  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:42.677030  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:42.677039  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:42.677048  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:42 GMT
	I1218 11:53:42.677057  706399 round_trippers.go:580]     Audit-Id: e7f3a1b6-10ed-4499-9e1f-e736dfc275de
	I1218 11:53:42.677069  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:42.677077  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:42.677226  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:42.677701  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:42.677715  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:42.677722  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:42.677728  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:42.679919  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:42.679944  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:42.679952  706399 round_trippers.go:580]     Audit-Id: b58dd22e-a294-44fd-a21e-73d9d8edf70c
	I1218 11:53:42.679958  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:42.679963  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:42.679968  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:42.679974  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:42.679979  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:42 GMT
	I1218 11:53:42.680228  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:42.680665  706399 pod_ready.go:102] pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace has status "Ready":"False"
	I1218 11:53:43.173955  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:43.173986  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:43.173994  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:43.174000  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:43.179521  706399 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 11:53:43.179550  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:43.179561  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:43 GMT
	I1218 11:53:43.179571  706399 round_trippers.go:580]     Audit-Id: 361dfd5f-b3d7-4aee-a744-f1e5be8299ab
	I1218 11:53:43.179579  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:43.179587  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:43.179597  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:43.179605  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:43.179840  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:43.180347  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:43.180364  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:43.180371  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:43.180377  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:43.182529  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:43.182552  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:43.182562  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:43 GMT
	I1218 11:53:43.182571  706399 round_trippers.go:580]     Audit-Id: c293cdc2-7c87-4e68-b2af-879cb905970f
	I1218 11:53:43.182578  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:43.182587  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:43.182594  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:43.182602  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:43.182772  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:43.673323  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:43.673355  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:43.673366  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:43.673375  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:43.676722  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:43.676752  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:43.676762  706399 round_trippers.go:580]     Audit-Id: 94473599-4289-4510-bb2d-43ba24b179f0
	I1218 11:53:43.676770  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:43.676778  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:43.676804  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:43.676819  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:43.676832  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:43 GMT
	I1218 11:53:43.677037  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:43.677593  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:43.677612  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:43.677624  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:43.677633  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:43.680695  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:43.680718  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:43.680727  706399 round_trippers.go:580]     Audit-Id: bdae868b-fd96-4f89-9ccb-5dce584f6e62
	I1218 11:53:43.680737  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:43.680745  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:43.680753  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:43.680770  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:43.680778  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:43 GMT
	I1218 11:53:43.681643  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.173868  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:44.173892  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.173900  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.173907  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.185903  706399 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1218 11:53:44.185939  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.185949  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.185957  706399 round_trippers.go:580]     Audit-Id: 0807c889-4f55-447d-909a-ec577df47c9f
	I1218 11:53:44.185964  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.185973  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.185981  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.185990  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.186217  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:44.186803  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:44.186821  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.186829  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.186835  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.189463  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.189484  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.189494  706399 round_trippers.go:580]     Audit-Id: d76f03a9-c756-48da-8594-aa7191476ce1
	I1218 11:53:44.189502  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.189510  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.189519  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.189527  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.189536  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.189666  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.673257  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:44.673294  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.673303  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.673309  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.678016  706399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 11:53:44.678037  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.678044  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.678061  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.678066  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.678071  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.678076  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.678082  706399 round_trippers.go:580]     Audit-Id: ded65a70-0ef7-468a-8c23-d3584306f5ce
	I1218 11:53:44.678372  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"887","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I1218 11:53:44.678912  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:44.678929  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.678936  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.678943  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.683034  706399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 11:53:44.683059  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.683068  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.683076  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.683085  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.683103  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.683116  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.683124  706399 round_trippers.go:580]     Audit-Id: dee3a488-a4c0-429c-a3d0-763057e3e6fa
	I1218 11:53:44.683810  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.684155  706399 pod_ready.go:92] pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:44.684175  706399 pod_ready.go:81] duration metric: took 11.01125188s waiting for pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.684185  706399 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.684251  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107476
	I1218 11:53:44.684260  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.684267  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.684273  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.686236  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:44.686257  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.686282  706399 round_trippers.go:580]     Audit-Id: 57a8ca26-0ed4-4f32-a864-04c5cde44f00
	I1218 11:53:44.686294  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.686304  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.686317  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.686324  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.686334  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.686465  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107476","namespace":"kube-system","uid":"57bcfe21-f4da-4bcf-bb4e-385b695e1e0f","resourceVersion":"860","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.124:2379","kubernetes.io/config.hash":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.mirror":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.seen":"2023-12-18T11:49:16.607301032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I1218 11:53:44.686943  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:44.686962  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.686969  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.686975  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.689166  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.689180  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.689186  706399 round_trippers.go:580]     Audit-Id: cf29250f-3957-4111-b39c-e51f822d2956
	I1218 11:53:44.689192  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.689196  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.689201  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.689206  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.689214  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.689316  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.689596  706399 pod_ready.go:92] pod "etcd-multinode-107476" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:44.689612  706399 pod_ready.go:81] duration metric: took 5.418084ms waiting for pod "etcd-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.689626  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.689687  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-107476
	I1218 11:53:44.689696  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.689702  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.689708  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.692944  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:44.692965  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.692974  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.692983  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.692991  706399 round_trippers.go:580]     Audit-Id: 6c7bd0e3-c0dc-4d2f-8958-13828542872b
	I1218 11:53:44.692999  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.693007  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.693017  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.693306  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-107476","namespace":"kube-system","uid":"ed1a5fb5-539a-4a7d-9977-42e1392858fb","resourceVersion":"856","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.124:8443","kubernetes.io/config.hash":"d249aa06177557dc7c27cc4c9fd3f8c4","kubernetes.io/config.mirror":"d249aa06177557dc7c27cc4c9fd3f8c4","kubernetes.io/config.seen":"2023-12-18T11:49:16.607305528Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7615 chars]
	I1218 11:53:44.693815  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:44.693830  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.693837  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.693842  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.696806  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.696825  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.696832  706399 round_trippers.go:580]     Audit-Id: 0bdd9f51-a776-465a-8a9e-1430d9ca51e2
	I1218 11:53:44.696837  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.696842  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.696846  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.696851  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.696856  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.697133  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.697438  706399 pod_ready.go:92] pod "kube-apiserver-multinode-107476" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:44.697454  706399 pod_ready.go:81] duration metric: took 7.821649ms waiting for pod "kube-apiserver-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.697463  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.697538  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-107476
	I1218 11:53:44.697551  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.697563  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.697579  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.700370  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.700389  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.700399  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.700408  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.700415  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.700424  706399 round_trippers.go:580]     Audit-Id: a8f89c02-db62-4dfd-aeec-c6d8bec7c55d
	I1218 11:53:44.700432  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.700440  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.702801  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-107476","namespace":"kube-system","uid":"9b1fc3f6-07ef-4577-9135-a1c4844e5555","resourceVersion":"851","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"00c351f167ca4a8342aa8125cafbf1ad","kubernetes.io/config.mirror":"00c351f167ca4a8342aa8125cafbf1ad","kubernetes.io/config.seen":"2023-12-18T11:49:16.607306981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7178 chars]
	I1218 11:53:44.703704  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:44.703722  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.703731  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.703740  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.706249  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.706267  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.706274  706399 round_trippers.go:580]     Audit-Id: 7ca2ece9-43e8-49c0-b944-aa148d24246d
	I1218 11:53:44.706279  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.706284  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.706289  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.706295  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.706308  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.706518  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.706859  706399 pod_ready.go:92] pod "kube-controller-manager-multinode-107476" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:44.706874  706399 pod_ready.go:81] duration metric: took 9.405069ms waiting for pod "kube-controller-manager-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.706885  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9xwh7" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.706943  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xwh7
	I1218 11:53:44.706954  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.706961  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.706969  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.709895  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.709910  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.709916  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.709921  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.709926  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.709931  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.709936  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.709941  706399 round_trippers.go:580]     Audit-Id: 73ccb16b-4b09-4e96-9ff3-b6875d4dcebf
	I1218 11:53:44.710221  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9xwh7","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1b02596-ab29-4f7a-8118-bd091eef9e44","resourceVersion":"520","creationTimestamp":"2023-12-18T11:50:18Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:50:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I1218 11:53:44.710653  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m02
	I1218 11:53:44.710668  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.710679  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.710689  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.713326  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.713340  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.713347  706399 round_trippers.go:580]     Audit-Id: 51b3f6a6-746d-4c41-89de-3e3d10f2ac93
	I1218 11:53:44.713367  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.713375  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.713380  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.713385  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.713396  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.713985  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476-m02","uid":"aac92642-4fcf-4fbe-89f6-b1c274d602fe","resourceVersion":"737","creationTimestamp":"2023-12-18T11:50:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T11_52_06_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:50:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3819 chars]
	I1218 11:53:44.714201  706399 pod_ready.go:92] pod "kube-proxy-9xwh7" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:44.714214  706399 pod_ready.go:81] duration metric: took 7.323276ms waiting for pod "kube-proxy-9xwh7" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.714224  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ff4bs" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.873723  706399 request.go:629] Waited for 159.413698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff4bs
	I1218 11:53:44.873815  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff4bs
	I1218 11:53:44.873823  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.873835  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.873846  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.876813  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.876855  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.876866  706399 round_trippers.go:580]     Audit-Id: 80d37f73-516f-4df0-a715-29b05d26f212
	I1218 11:53:44.876872  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.876878  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.876883  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.876888  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.876895  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.877037  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ff4bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"a5e9af15-7c15-4de8-8be0-1b8e7289125f","resourceVersion":"746","creationTimestamp":"2023-12-18T11:51:17Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:51:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I1218 11:53:45.074061  706399 request.go:629] Waited for 196.407368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m03
	I1218 11:53:45.074135  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m03
	I1218 11:53:45.074141  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.074148  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.074154  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.076973  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:45.077001  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.077013  706399 round_trippers.go:580]     Audit-Id: 0150f682-9003-42e6-95c5-4a92f0ba4920
	I1218 11:53:45.077022  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.077031  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.077040  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.077046  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.077051  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.077151  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476-m03","uid":"18274b06-f1b8-4878-9e6b-e3745fba73a7","resourceVersion":"759","creationTimestamp":"2023-12-18T11:52:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T11_52_06_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:52:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3635 chars]
	I1218 11:53:45.077554  706399 pod_ready.go:92] pod "kube-proxy-ff4bs" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:45.077579  706399 pod_ready.go:81] duration metric: took 363.348514ms waiting for pod "kube-proxy-ff4bs" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:45.077591  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jf8kx" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:45.273746  706399 request.go:629] Waited for 196.06681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jf8kx
	I1218 11:53:45.273821  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jf8kx
	I1218 11:53:45.273827  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.273835  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.273842  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.276787  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:45.276809  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.276816  706399 round_trippers.go:580]     Audit-Id: a6700efe-44c9-4e0b-ab8b-4cceb94a69cc
	I1218 11:53:45.276825  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.276834  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.276842  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.276850  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.276859  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.277036  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jf8kx","generateName":"kube-proxy-","namespace":"kube-system","uid":"060b1020-573b-4b35-9a0b-e04f37535267","resourceVersion":"782","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
	I1218 11:53:45.474033  706399 request.go:629] Waited for 196.438047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:45.474131  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:45.474142  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.474156  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.474169  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.477824  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:45.477853  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.477864  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.477873  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.477880  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.477889  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.477897  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.477909  706399 round_trippers.go:580]     Audit-Id: c36a7602-f8f6-447c-85d1-76254cd38665
	I1218 11:53:45.478069  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:45.478494  706399 pod_ready.go:92] pod "kube-proxy-jf8kx" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:45.478515  706399 pod_ready.go:81] duration metric: took 400.917905ms waiting for pod "kube-proxy-jf8kx" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:45.478525  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:45.673370  706399 request.go:629] Waited for 194.759725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107476
	I1218 11:53:45.673457  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107476
	I1218 11:53:45.673463  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.673471  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.673480  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.677105  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:45.677128  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.677137  706399 round_trippers.go:580]     Audit-Id: 5a30c9ba-0617-498f-83e0-396ac7b0a17b
	I1218 11:53:45.677145  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.677153  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.677160  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.677167  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.677180  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.677824  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-107476","namespace":"kube-system","uid":"08f65d94-d942-4ae5-a937-e3efff4b51dd","resourceVersion":"862","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"47de9e5e3d9b879716556f063f68cd22","kubernetes.io/config.mirror":"47de9e5e3d9b879716556f063f68cd22","kubernetes.io/config.seen":"2023-12-18T11:49:16.607308314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4908 chars]
	I1218 11:53:45.873712  706399 request.go:629] Waited for 195.397858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:45.873812  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:45.873823  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.873831  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.873837  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.876889  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:45.876911  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.876918  706399 round_trippers.go:580]     Audit-Id: bfe725d9-9c70-4dc7-bd45-d55e484f467a
	I1218 11:53:45.876924  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.876928  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.876933  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.876938  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.876943  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.877172  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:45.877490  706399 pod_ready.go:92] pod "kube-scheduler-multinode-107476" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:45.877504  706399 pod_ready.go:81] duration metric: took 398.969668ms waiting for pod "kube-scheduler-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:45.877517  706399 pod_ready.go:38] duration metric: took 12.212180593s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 11:53:45.877535  706399 api_server.go:52] waiting for apiserver process to appear ...
	I1218 11:53:45.877585  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:45.893465  706399 command_runner.go:130] > 1729
	I1218 11:53:45.893561  706399 api_server.go:72] duration metric: took 14.907630232s to wait for apiserver process to appear ...
	I1218 11:53:45.893577  706399 api_server.go:88] waiting for apiserver healthz status ...
	I1218 11:53:45.893601  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:45.899790  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 200:
	ok
	I1218 11:53:45.899867  706399 round_trippers.go:463] GET https://192.168.39.124:8443/version
	I1218 11:53:45.899873  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.899881  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.899887  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.901094  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:45.901120  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.901128  706399 round_trippers.go:580]     Audit-Id: 7b85e82b-ec64-4584-8946-326f560ec5fc
	I1218 11:53:45.901134  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.901139  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.901145  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.901150  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.901156  706399 round_trippers.go:580]     Content-Length: 264
	I1218 11:53:45.901164  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.901186  706399 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1218 11:53:45.901243  706399 api_server.go:141] control plane version: v1.28.4
	I1218 11:53:45.901259  706399 api_server.go:131] duration metric: took 7.675448ms to wait for apiserver health ...
	I1218 11:53:45.901267  706399 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 11:53:46.073761  706399 request.go:629] Waited for 172.377393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:46.073824  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:46.073837  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:46.073845  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:46.073851  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:46.078255  706399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 11:53:46.078283  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:46.078291  706399 round_trippers.go:580]     Audit-Id: 8a8a3f91-2b40-4ed6-8673-2e9287ce0bf7
	I1218 11:53:46.078296  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:46.078302  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:46.078307  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:46.078312  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:46.078317  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:46 GMT
	I1218 11:53:46.079532  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"891"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"887","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82968 chars]
	I1218 11:53:46.083180  706399 system_pods.go:59] 12 kube-system pods found
	I1218 11:53:46.083218  706399 system_pods.go:61] "coredns-5dd5756b68-nl8xc" [17cd3c37-30e8-4d98-81f5-44f58135adf3] Running
	I1218 11:53:46.083226  706399 system_pods.go:61] "etcd-multinode-107476" [57bcfe21-f4da-4bcf-bb4e-385b695e1e0f] Running
	I1218 11:53:46.083231  706399 system_pods.go:61] "kindnet-6wlkb" [1cf338b4-8a33-4e69-aa83-3cd29b041e08] Running
	I1218 11:53:46.083237  706399 system_pods.go:61] "kindnet-8hrhv" [ef739466-48d4-4fbd-8fa5-63a41e4c6833] Running
	I1218 11:53:46.083242  706399 system_pods.go:61] "kindnet-l9h8d" [0acf0fd4-5988-4545-828c-7cb6076a5b18] Running
	I1218 11:53:46.083248  706399 system_pods.go:61] "kube-apiserver-multinode-107476" [ed1a5fb5-539a-4a7d-9977-42e1392858fb] Running
	I1218 11:53:46.083263  706399 system_pods.go:61] "kube-controller-manager-multinode-107476" [9b1fc3f6-07ef-4577-9135-a1c4844e5555] Running
	I1218 11:53:46.083274  706399 system_pods.go:61] "kube-proxy-9xwh7" [d1b02596-ab29-4f7a-8118-bd091eef9e44] Running
	I1218 11:53:46.083283  706399 system_pods.go:61] "kube-proxy-ff4bs" [a5e9af15-7c15-4de8-8be0-1b8e7289125f] Running
	I1218 11:53:46.083290  706399 system_pods.go:61] "kube-proxy-jf8kx" [060b1020-573b-4b35-9a0b-e04f37535267] Running
	I1218 11:53:46.083299  706399 system_pods.go:61] "kube-scheduler-multinode-107476" [08f65d94-d942-4ae5-a937-e3efff4b51dd] Running
	I1218 11:53:46.083306  706399 system_pods.go:61] "storage-provisioner" [e04ec19d-39a8-4849-b604-8e46b7f9cea3] Running
	I1218 11:53:46.083317  706399 system_pods.go:74] duration metric: took 182.043479ms to wait for pod list to return data ...
	I1218 11:53:46.083328  706399 default_sa.go:34] waiting for default service account to be created ...
	I1218 11:53:46.273839  706399 request.go:629] Waited for 190.41018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/default/serviceaccounts
	I1218 11:53:46.273914  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/default/serviceaccounts
	I1218 11:53:46.273919  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:46.273928  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:46.273934  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:46.277176  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:46.277201  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:46.277209  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:46.277219  706399 round_trippers.go:580]     Content-Length: 261
	I1218 11:53:46.277227  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:46 GMT
	I1218 11:53:46.277236  706399 round_trippers.go:580]     Audit-Id: 8fb527bf-40a9-449e-b359-393d44708047
	I1218 11:53:46.277245  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:46.277251  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:46.277260  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:46.277289  706399 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"891"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"d939767d-22df-4871-b1e9-1f264cd78bb5","resourceVersion":"351","creationTimestamp":"2023-12-18T11:49:29Z"}}]}
	I1218 11:53:46.277563  706399 default_sa.go:45] found service account: "default"
	I1218 11:53:46.277611  706399 default_sa.go:55] duration metric: took 194.253503ms for default service account to be created ...
	I1218 11:53:46.277627  706399 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 11:53:46.474114  706399 request.go:629] Waited for 196.394547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:46.474195  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:46.474203  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:46.474215  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:46.474228  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:46.478438  706399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 11:53:46.478468  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:46.478479  706399 round_trippers.go:580]     Audit-Id: bb45fe89-dded-417e-8392-f9b3d76b81f5
	I1218 11:53:46.478488  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:46.478496  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:46.478505  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:46.478512  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:46.478528  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:46 GMT
	I1218 11:53:46.479114  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"887","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82968 chars]
	I1218 11:53:46.481559  706399 system_pods.go:86] 12 kube-system pods found
	I1218 11:53:46.481584  706399 system_pods.go:89] "coredns-5dd5756b68-nl8xc" [17cd3c37-30e8-4d98-81f5-44f58135adf3] Running
	I1218 11:53:46.481592  706399 system_pods.go:89] "etcd-multinode-107476" [57bcfe21-f4da-4bcf-bb4e-385b695e1e0f] Running
	I1218 11:53:46.481599  706399 system_pods.go:89] "kindnet-6wlkb" [1cf338b4-8a33-4e69-aa83-3cd29b041e08] Running
	I1218 11:53:46.481605  706399 system_pods.go:89] "kindnet-8hrhv" [ef739466-48d4-4fbd-8fa5-63a41e4c6833] Running
	I1218 11:53:46.481610  706399 system_pods.go:89] "kindnet-l9h8d" [0acf0fd4-5988-4545-828c-7cb6076a5b18] Running
	I1218 11:53:46.481619  706399 system_pods.go:89] "kube-apiserver-multinode-107476" [ed1a5fb5-539a-4a7d-9977-42e1392858fb] Running
	I1218 11:53:46.481627  706399 system_pods.go:89] "kube-controller-manager-multinode-107476" [9b1fc3f6-07ef-4577-9135-a1c4844e5555] Running
	I1218 11:53:46.481634  706399 system_pods.go:89] "kube-proxy-9xwh7" [d1b02596-ab29-4f7a-8118-bd091eef9e44] Running
	I1218 11:53:46.481643  706399 system_pods.go:89] "kube-proxy-ff4bs" [a5e9af15-7c15-4de8-8be0-1b8e7289125f] Running
	I1218 11:53:46.481651  706399 system_pods.go:89] "kube-proxy-jf8kx" [060b1020-573b-4b35-9a0b-e04f37535267] Running
	I1218 11:53:46.481658  706399 system_pods.go:89] "kube-scheduler-multinode-107476" [08f65d94-d942-4ae5-a937-e3efff4b51dd] Running
	I1218 11:53:46.481667  706399 system_pods.go:89] "storage-provisioner" [e04ec19d-39a8-4849-b604-8e46b7f9cea3] Running
	I1218 11:53:46.481677  706399 system_pods.go:126] duration metric: took 204.042426ms to wait for k8s-apps to be running ...
	I1218 11:53:46.481690  706399 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 11:53:46.481747  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 11:53:46.496708  706399 system_svc.go:56] duration metric: took 15.008248ms WaitForService to wait for kubelet.
	I1218 11:53:46.496742  706399 kubeadm.go:581] duration metric: took 15.510812865s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 11:53:46.496766  706399 node_conditions.go:102] verifying NodePressure condition ...
	I1218 11:53:46.674277  706399 request.go:629] Waited for 177.41815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes
	I1218 11:53:46.674357  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes
	I1218 11:53:46.674362  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:46.674418  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:46.674489  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:46.677744  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:46.677763  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:46.677771  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:46.677777  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:46.677783  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:46 GMT
	I1218 11:53:46.677788  706399 round_trippers.go:580]     Audit-Id: 127b003d-0ea0-41a7-833f-6b9650904cf1
	I1218 11:53:46.677794  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:46.677803  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:46.678201  706399 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14648 chars]
	I1218 11:53:46.678828  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:46.678850  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:46.678863  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:46.678867  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:46.678872  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:46.678875  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:46.678879  706399 node_conditions.go:105] duration metric: took 182.108972ms to run NodePressure ...
	I1218 11:53:46.678892  706399 start.go:228] waiting for startup goroutines ...
	I1218 11:53:46.678901  706399 start.go:233] waiting for cluster config update ...
	I1218 11:53:46.678914  706399 start.go:242] writing updated cluster config ...
	I1218 11:53:46.679419  706399 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:53:46.679525  706399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/config.json ...
	I1218 11:53:46.683229  706399 out.go:177] * Starting worker node multinode-107476-m02 in cluster multinode-107476
	I1218 11:53:46.684696  706399 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 11:53:46.684730  706399 cache.go:56] Caching tarball of preloaded images
	I1218 11:53:46.684832  706399 preload.go:174] Found /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 11:53:46.684846  706399 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 11:53:46.684979  706399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/config.json ...
	I1218 11:53:46.685210  706399 start.go:365] acquiring machines lock for multinode-107476-m02: {Name:mkb0cc9fb73bf09f8db2889f035117cd52674d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 11:53:46.685261  706399 start.go:369] acquired machines lock for "multinode-107476-m02" in 28.185µs
	I1218 11:53:46.685282  706399 start.go:96] Skipping create...Using existing machine configuration
	I1218 11:53:46.685293  706399 fix.go:54] fixHost starting: m02
	I1218 11:53:46.685600  706399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:53:46.685626  706399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:53:46.700004  706399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44879
	I1218 11:53:46.700443  706399 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:53:46.700912  706399 main.go:141] libmachine: Using API Version  1
	I1218 11:53:46.700933  706399 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:53:46.701277  706399 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:53:46.701452  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:53:46.701622  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetState
	I1218 11:53:46.703098  706399 fix.go:102] recreateIfNeeded on multinode-107476-m02: state=Stopped err=<nil>
	I1218 11:53:46.703120  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	W1218 11:53:46.703304  706399 fix.go:128] unexpected machine state, will restart: <nil>
	I1218 11:53:46.705286  706399 out.go:177] * Restarting existing kvm2 VM for "multinode-107476-m02" ...
	I1218 11:53:46.706596  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .Start
	I1218 11:53:46.706784  706399 main.go:141] libmachine: (multinode-107476-m02) Ensuring networks are active...
	I1218 11:53:46.707411  706399 main.go:141] libmachine: (multinode-107476-m02) Ensuring network default is active
	I1218 11:53:46.707790  706399 main.go:141] libmachine: (multinode-107476-m02) Ensuring network mk-multinode-107476 is active
	I1218 11:53:46.708193  706399 main.go:141] libmachine: (multinode-107476-m02) Getting domain xml...
	I1218 11:53:46.708862  706399 main.go:141] libmachine: (multinode-107476-m02) Creating domain...
	I1218 11:53:47.936995  706399 main.go:141] libmachine: (multinode-107476-m02) Waiting to get IP...
	I1218 11:53:47.937889  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:47.938288  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:47.938375  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:47.938256  706643 retry.go:31] will retry after 227.139333ms: waiting for machine to come up
	I1218 11:53:48.166820  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:48.167284  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:48.167314  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:48.167220  706643 retry.go:31] will retry after 375.610064ms: waiting for machine to come up
	I1218 11:53:48.544738  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:48.545081  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:48.545107  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:48.545047  706643 retry.go:31] will retry after 378.162219ms: waiting for machine to come up
	I1218 11:53:48.924609  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:48.925035  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:48.925066  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:48.924973  706643 retry.go:31] will retry after 372.216471ms: waiting for machine to come up
	I1218 11:53:49.298428  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:49.298906  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:49.298931  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:49.298873  706643 retry.go:31] will retry after 655.95423ms: waiting for machine to come up
	I1218 11:53:49.956567  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:49.957078  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:49.957106  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:49.957030  706643 retry.go:31] will retry after 860.476893ms: waiting for machine to come up
	I1218 11:53:50.819121  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:50.819479  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:50.819506  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:50.819449  706643 retry.go:31] will retry after 763.336427ms: waiting for machine to come up
	I1218 11:53:51.585019  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:51.585507  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:51.585542  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:51.585441  706643 retry.go:31] will retry after 963.292989ms: waiting for machine to come up
	I1218 11:53:52.550108  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:52.550472  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:52.550529  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:52.550417  706643 retry.go:31] will retry after 1.166437684s: waiting for machine to come up
	I1218 11:53:53.718762  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:53.719219  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:53.719252  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:53.719160  706643 retry.go:31] will retry after 2.253762045s: waiting for machine to come up
	I1218 11:53:55.974428  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:55.974863  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:55.974891  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:55.974822  706643 retry.go:31] will retry after 2.547747733s: waiting for machine to come up
	I1218 11:53:58.523817  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:58.524293  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:58.524342  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:58.524169  706643 retry.go:31] will retry after 2.214783254s: waiting for machine to come up
	I1218 11:54:00.740859  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:00.741279  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:54:00.741308  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:54:00.741245  706643 retry.go:31] will retry after 4.522253429s: waiting for machine to come up
	I1218 11:54:05.267134  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.267545  706399 main.go:141] libmachine: (multinode-107476-m02) Found IP for machine: 192.168.39.238
	I1218 11:54:05.267562  706399 main.go:141] libmachine: (multinode-107476-m02) Reserving static IP address...
	I1218 11:54:05.267572  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has current primary IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.268162  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "multinode-107476-m02", mac: "52:54:00:66:62:9b", ip: "192.168.39.238"} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.268198  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | skip adding static IP to network mk-multinode-107476 - found existing host DHCP lease matching {name: "multinode-107476-m02", mac: "52:54:00:66:62:9b", ip: "192.168.39.238"}
	I1218 11:54:05.268217  706399 main.go:141] libmachine: (multinode-107476-m02) Reserved static IP address: 192.168.39.238
	I1218 11:54:05.268237  706399 main.go:141] libmachine: (multinode-107476-m02) Waiting for SSH to be available...
	I1218 11:54:05.268253  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | Getting to WaitForSSH function...
	I1218 11:54:05.270329  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.270682  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.270713  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.270879  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | Using SSH client type: external
	I1218 11:54:05.270921  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa (-rw-------)
	I1218 11:54:05.270945  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1218 11:54:05.270955  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | About to run SSH command:
	I1218 11:54:05.270967  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | exit 0
	I1218 11:54:05.359260  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | SSH cmd err, output: <nil>: 
	I1218 11:54:05.359669  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetConfigRaw
	I1218 11:54:05.360312  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetIP
	I1218 11:54:05.362713  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.363152  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.363183  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.363469  706399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/config.json ...
	I1218 11:54:05.363688  706399 machine.go:88] provisioning docker machine ...
	I1218 11:54:05.363708  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:05.363941  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetMachineName
	I1218 11:54:05.364144  706399 buildroot.go:166] provisioning hostname "multinode-107476-m02"
	I1218 11:54:05.364165  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetMachineName
	I1218 11:54:05.364403  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:05.366681  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.367078  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.367106  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.367207  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:05.367386  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.367524  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.367640  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:05.367789  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:05.368264  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:05.368292  706399 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-107476-m02 && echo "multinode-107476-m02" | sudo tee /etc/hostname
	I1218 11:54:05.497634  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-107476-m02
	
	I1218 11:54:05.497668  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:05.500537  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.500970  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.501003  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.501203  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:05.501432  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.501618  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.501779  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:05.501985  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:05.502309  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:05.502328  706399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-107476-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-107476-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-107476-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 11:54:05.623703  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 11:54:05.623739  706399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17824-683489/.minikube CaCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17824-683489/.minikube}
	I1218 11:54:05.623762  706399 buildroot.go:174] setting up certificates
	I1218 11:54:05.623773  706399 provision.go:83] configureAuth start
	I1218 11:54:05.623782  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetMachineName
	I1218 11:54:05.624072  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetIP
	I1218 11:54:05.626748  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.627115  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.627143  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.627342  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:05.629559  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.629885  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.629931  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.630011  706399 provision.go:138] copyHostCerts
	I1218 11:54:05.630042  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem
	I1218 11:54:05.630074  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem, removing ...
	I1218 11:54:05.630086  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem
	I1218 11:54:05.630147  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem (1082 bytes)
	I1218 11:54:05.630219  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem
	I1218 11:54:05.630242  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem, removing ...
	I1218 11:54:05.630249  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem
	I1218 11:54:05.630271  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem (1123 bytes)
	I1218 11:54:05.630313  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem
	I1218 11:54:05.630328  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem, removing ...
	I1218 11:54:05.630334  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem
	I1218 11:54:05.630353  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem (1679 bytes)
	I1218 11:54:05.630395  706399 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca-key.pem org=jenkins.multinode-107476-m02 san=[192.168.39.238 192.168.39.238 localhost 127.0.0.1 minikube multinode-107476-m02]
	I1218 11:54:05.741217  706399 provision.go:172] copyRemoteCerts
	I1218 11:54:05.741280  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 11:54:05.741305  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:05.744095  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.744415  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.744451  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.744641  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:05.744867  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.745081  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:05.745239  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa Username:docker}
	I1218 11:54:05.832540  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 11:54:05.832629  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 11:54:05.857130  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 11:54:05.857201  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1218 11:54:05.880270  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 11:54:05.880339  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 11:54:05.904290  706399 provision.go:86] duration metric: configureAuth took 280.501532ms
	I1218 11:54:05.904323  706399 buildroot.go:189] setting minikube options for container-runtime
	I1218 11:54:05.904615  706399 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:54:05.904650  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:05.904939  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:05.907613  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.908019  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.908060  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.908259  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:05.908465  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.908634  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.908797  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:05.908991  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:05.909320  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:05.909336  706399 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 11:54:06.025905  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 11:54:06.025936  706399 buildroot.go:70] root file system type: tmpfs
	I1218 11:54:06.026101  706399 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 11:54:06.026127  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:06.029047  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:06.029390  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:06.029429  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:06.029644  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:06.029864  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:06.030054  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:06.030178  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:06.030331  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:06.030646  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:06.030705  706399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.124"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 11:54:06.156093  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.124
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 11:54:06.156134  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:06.159082  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:06.159496  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:06.159528  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:06.159684  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:06.159913  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:06.160156  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:06.160304  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:06.160478  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:06.160807  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:06.160825  706399 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 11:54:07.046577  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
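The SSH command above uses minikube's install-if-changed idiom: write the candidate unit to `docker.service.new`, then replace the live unit and reload only when it differs (or, as in this log, does not yet exist). A minimal standalone sketch of that idiom, using a scratch directory instead of `/lib/systemd/system` so it needs no root:

```shell
# Hypothetical sketch of the diff-or-replace idiom from the log above.
# unit_dir stands in for /lib/systemd/system; no systemd calls are made here.
unit_dir=$(mktemp -d)
printf '%s\n' '[Unit]' 'Description=Demo unit' > "$unit_dir/docker.service.new"

# diff exits non-zero when the target is missing or differs, which triggers
# the replacement branch -- exactly the path taken in this log ("can't stat").
if ! diff -u "$unit_dir/docker.service" "$unit_dir/docker.service.new" 2>/dev/null; then
  mv "$unit_dir/docker.service.new" "$unit_dir/docker.service"
  # on a real host, minikube follows this with:
  #   systemctl -f daemon-reload && systemctl -f enable docker && systemctl -f restart docker
fi
```

When the files are identical, `diff` succeeds and the unit (and the running daemon) is left untouched, which is why repeated provisioning runs are cheap.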
	
	I1218 11:54:07.046609  706399 machine.go:91] provisioned docker machine in 1.68290659s
	I1218 11:54:07.046627  706399 start.go:300] post-start starting for "multinode-107476-m02" (driver="kvm2")
	I1218 11:54:07.046641  706399 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 11:54:07.046672  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:07.047004  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 11:54:07.047085  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:07.049936  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.050337  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:07.050373  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.050532  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:07.050720  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:07.050893  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:07.051075  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa Username:docker}
	I1218 11:54:07.137937  706399 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 11:54:07.141965  706399 command_runner.go:130] > NAME=Buildroot
	I1218 11:54:07.141990  706399 command_runner.go:130] > VERSION=2021.02.12-1-g0492d51-dirty
	I1218 11:54:07.141996  706399 command_runner.go:130] > ID=buildroot
	I1218 11:54:07.142004  706399 command_runner.go:130] > VERSION_ID=2021.02.12
	I1218 11:54:07.142016  706399 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1218 11:54:07.142062  706399 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 11:54:07.142079  706399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17824-683489/.minikube/addons for local assets ...
	I1218 11:54:07.142150  706399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17824-683489/.minikube/files for local assets ...
	I1218 11:54:07.142249  706399 filesync.go:149] local asset: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem -> 6907392.pem in /etc/ssl/certs
	I1218 11:54:07.142262  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem -> /etc/ssl/certs/6907392.pem
	I1218 11:54:07.142338  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 11:54:07.150461  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem --> /etc/ssl/certs/6907392.pem (1708 bytes)
	I1218 11:54:07.173512  706399 start.go:303] post-start completed in 126.867172ms
	I1218 11:54:07.173544  706399 fix.go:56] fixHost completed within 20.488252806s
	I1218 11:54:07.173567  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:07.176291  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.176751  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:07.176783  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.176950  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:07.177185  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:07.177343  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:07.177560  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:07.177727  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:07.178069  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:07.178084  706399 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1218 11:54:07.292631  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702900447.242005495
	
	I1218 11:54:07.292655  706399 fix.go:206] guest clock: 1702900447.242005495
	I1218 11:54:07.292662  706399 fix.go:219] Guest: 2023-12-18 11:54:07.242005495 +0000 UTC Remote: 2023-12-18 11:54:07.173548129 +0000 UTC m=+83.636906782 (delta=68.457366ms)
	I1218 11:54:07.292718  706399 fix.go:190] guest clock delta is within tolerance: 68.457366ms
	I1218 11:54:07.292725  706399 start.go:83] releasing machines lock for "multinode-107476-m02", held for 20.607451202s
	I1218 11:54:07.292751  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:07.293062  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetIP
	I1218 11:54:07.295732  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.296145  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:07.296179  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.298392  706399 out.go:177] * Found network options:
	I1218 11:54:07.299731  706399 out.go:177]   - NO_PROXY=192.168.39.124
	W1218 11:54:07.301071  706399 proxy.go:119] fail to check proxy env: Error ip not in block
	I1218 11:54:07.301110  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:07.301626  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:07.301817  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:07.301902  706399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 11:54:07.301942  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	W1218 11:54:07.302000  706399 proxy.go:119] fail to check proxy env: Error ip not in block
	I1218 11:54:07.302076  706399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 11:54:07.302097  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:07.304593  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.304845  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.304987  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:07.305018  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.305124  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:07.305254  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:07.305278  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.305303  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:07.305455  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:07.305523  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:07.305617  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:07.305681  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa Username:docker}
	I1218 11:54:07.305742  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:07.305842  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa Username:docker}
	I1218 11:54:07.391351  706399 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1218 11:54:07.412687  706399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 11:54:07.412710  706399 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 11:54:07.412781  706399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 11:54:07.429410  706399 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1218 11:54:07.429693  706399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1218 11:54:07.429717  706399 start.go:475] detecting cgroup driver to use...
	I1218 11:54:07.429853  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 11:54:07.445443  706399 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1218 11:54:07.445529  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 11:54:07.455706  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 11:54:07.465480  706399 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 11:54:07.465531  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 11:54:07.475348  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 11:54:07.485332  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 11:54:07.495743  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 11:54:07.505751  706399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 11:54:07.515919  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 11:54:07.525808  706399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 11:54:07.534674  706399 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 11:54:07.534812  706399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 11:54:07.544293  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:54:07.647636  706399 ssh_runner.go:195] Run: sudo systemctl restart containerd
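The run of `sed -i` commands above rewrites `/etc/containerd/config.toml` in place to pin the sandbox image and force the cgroupfs driver. A hypothetical reproduction of the two key rewrites against a scratch copy of the file (so it is safe to run anywhere):

```shell
# Hypothetical sketch: apply the same sed rewrites seen in the log to a
# throwaway config.toml rather than the real /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"
  SystemdCgroup = true
EOF

# pin the pause image, preserving the original indentation via \1
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
# switch containerd from the systemd cgroup driver to cgroupfs
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
```

On the real node this is followed by `systemctl daemon-reload` and `systemctl restart containerd`, as the log shows.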
	I1218 11:54:07.664455  706399 start.go:475] detecting cgroup driver to use...
	I1218 11:54:07.664544  706399 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 11:54:07.678392  706399 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1218 11:54:07.678419  706399 command_runner.go:130] > [Unit]
	I1218 11:54:07.678429  706399 command_runner.go:130] > Description=Docker Application Container Engine
	I1218 11:54:07.678438  706399 command_runner.go:130] > Documentation=https://docs.docker.com
	I1218 11:54:07.678446  706399 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1218 11:54:07.678454  706399 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1218 11:54:07.678468  706399 command_runner.go:130] > StartLimitBurst=3
	I1218 11:54:07.678475  706399 command_runner.go:130] > StartLimitIntervalSec=60
	I1218 11:54:07.678482  706399 command_runner.go:130] > [Service]
	I1218 11:54:07.678489  706399 command_runner.go:130] > Type=notify
	I1218 11:54:07.678499  706399 command_runner.go:130] > Restart=on-failure
	I1218 11:54:07.678506  706399 command_runner.go:130] > Environment=NO_PROXY=192.168.39.124
	I1218 11:54:07.678522  706399 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1218 11:54:07.678539  706399 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1218 11:54:07.678552  706399 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1218 11:54:07.678569  706399 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1218 11:54:07.678579  706399 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1218 11:54:07.678623  706399 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1218 11:54:07.678642  706399 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1218 11:54:07.678658  706399 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1218 11:54:07.678672  706399 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1218 11:54:07.678681  706399 command_runner.go:130] > ExecStart=
	I1218 11:54:07.678704  706399 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1218 11:54:07.678716  706399 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1218 11:54:07.678732  706399 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1218 11:54:07.678739  706399 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1218 11:54:07.678746  706399 command_runner.go:130] > LimitNOFILE=infinity
	I1218 11:54:07.678750  706399 command_runner.go:130] > LimitNPROC=infinity
	I1218 11:54:07.678754  706399 command_runner.go:130] > LimitCORE=infinity
	I1218 11:54:07.678759  706399 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1218 11:54:07.678767  706399 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1218 11:54:07.678773  706399 command_runner.go:130] > TasksMax=infinity
	I1218 11:54:07.678779  706399 command_runner.go:130] > TimeoutStartSec=0
	I1218 11:54:07.678786  706399 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1218 11:54:07.678790  706399 command_runner.go:130] > Delegate=yes
	I1218 11:54:07.678797  706399 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1218 11:54:07.678805  706399 command_runner.go:130] > KillMode=process
	I1218 11:54:07.678811  706399 command_runner.go:130] > [Install]
	I1218 11:54:07.678817  706399 command_runner.go:130] > WantedBy=multi-user.target
	I1218 11:54:07.678881  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 11:54:07.699422  706399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 11:54:07.717253  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 11:54:07.729421  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 11:54:07.740150  706399 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 11:54:07.771472  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 11:54:07.783922  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 11:54:07.801472  706399 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1218 11:54:07.801565  706399 ssh_runner.go:195] Run: which cri-dockerd
	I1218 11:54:07.805378  706399 command_runner.go:130] > /usr/bin/cri-dockerd
	I1218 11:54:07.805607  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 11:54:07.814619  706399 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 11:54:07.830501  706399 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 11:54:07.940117  706399 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 11:54:08.043122  706399 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 11:54:08.043192  706399 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1218 11:54:08.059638  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:54:08.160537  706399 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 11:54:09.625721  706399 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4651404s)
	I1218 11:54:09.625800  706399 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 11:54:09.727037  706399 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1218 11:54:09.837890  706399 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 11:54:09.952084  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:54:10.068114  706399 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1218 11:54:10.082662  706399 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I1218 11:54:10.083512  706399 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1218 11:54:10.094378  706399 command_runner.go:130] > -- Journal begins at Mon 2023-12-18 11:53:58 UTC, ends at Mon 2023-12-18 11:54:10 UTC. --
	I1218 11:54:10.094403  706399 command_runner.go:130] > Dec 18 11:53:59 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1218 11:54:10.094413  706399 command_runner.go:130] > Dec 18 11:53:59 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1218 11:54:10.094426  706399 command_runner.go:130] > Dec 18 11:54:01 minikube systemd[1]: cri-docker.socket: Succeeded.
	I1218 11:54:10.094437  706399 command_runner.go:130] > Dec 18 11:54:01 minikube systemd[1]: Closed CRI Docker Socket for the API.
	I1218 11:54:10.094447  706399 command_runner.go:130] > Dec 18 11:54:01 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	I1218 11:54:10.094463  706399 command_runner.go:130] > Dec 18 11:54:01 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1218 11:54:10.094476  706399 command_runner.go:130] > Dec 18 11:54:01 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1218 11:54:10.094488  706399 command_runner.go:130] > Dec 18 11:54:04 minikube systemd[1]: cri-docker.socket: Succeeded.
	I1218 11:54:10.094501  706399 command_runner.go:130] > Dec 18 11:54:04 minikube systemd[1]: Closed CRI Docker Socket for the API.
	I1218 11:54:10.094509  706399 command_runner.go:130] > Dec 18 11:54:04 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	I1218 11:54:10.094518  706399 command_runner.go:130] > Dec 18 11:54:04 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1218 11:54:10.094526  706399 command_runner.go:130] > Dec 18 11:54:04 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1218 11:54:10.094544  706399 command_runner.go:130] > Dec 18 11:54:06 multinode-107476-m02 systemd[1]: cri-docker.socket: Succeeded.
	I1218 11:54:10.094553  706399 command_runner.go:130] > Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Closed CRI Docker Socket for the API.
	I1218 11:54:10.094561  706399 command_runner.go:130] > Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	I1218 11:54:10.094570  706399 command_runner.go:130] > Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Starting CRI Docker Socket for the API.
	I1218 11:54:10.094579  706399 command_runner.go:130] > Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Listening on CRI Docker Socket for the API.
	I1218 11:54:10.094587  706399 command_runner.go:130] > Dec 18 11:54:10 multinode-107476-m02 systemd[1]: cri-docker.socket: Succeeded.
	I1218 11:54:10.094596  706399 command_runner.go:130] > Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Closed CRI Docker Socket for the API.
	I1218 11:54:10.094607  706399 command_runner.go:130] > Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	I1218 11:54:10.094618  706399 command_runner.go:130] > Dec 18 11:54:10 multinode-107476-m02 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	I1218 11:54:10.094628  706399 command_runner.go:130] > Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	I1218 11:54:10.097238  706399 out.go:177] 
	W1218 11:54:10.099022  706399 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Mon 2023-12-18 11:53:58 UTC, ends at Mon 2023-12-18 11:54:10 UTC. --
	Dec 18 11:53:59 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 18 11:53:59 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 18 11:54:01 minikube systemd[1]: cri-docker.socket: Succeeded.
	Dec 18 11:54:01 minikube systemd[1]: Closed CRI Docker Socket for the API.
	Dec 18 11:54:01 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 18 11:54:01 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 18 11:54:01 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 18 11:54:04 minikube systemd[1]: cri-docker.socket: Succeeded.
	Dec 18 11:54:04 minikube systemd[1]: Closed CRI Docker Socket for the API.
	Dec 18 11:54:04 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 18 11:54:04 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 18 11:54:04 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 18 11:54:06 multinode-107476-m02 systemd[1]: cri-docker.socket: Succeeded.
	Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 18 11:54:10 multinode-107476-m02 systemd[1]: cri-docker.socket: Succeeded.
	Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 18 11:54:10 multinode-107476-m02 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	
	-- /stdout --
	W1218 11:54:10.099052  706399 out.go:239] * 
	W1218 11:54:10.099923  706399 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 11:54:10.101451  706399 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:325: failed to run minikube start. args "out/minikube-linux-amd64 node list -p multinode-107476" : exit status 90
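Editor's note on the root cause: the journal above shows systemd refusing to restart the socket unit with `cri-docker.socket: Socket service cri-docker.service already active, refusing.` — systemd will not (re)activate a `.socket` unit while its paired `.service` is already running. A plausible manual recovery (an assumption based on that systemd behavior, not a step taken in this run) is to stop the service before restarting the socket:

```shell
# Hypothetical workaround sketch, not part of the recorded test run.
# systemd refuses to start a .socket unit whose paired .service is active,
# so stop the service first, then restart the socket, then the service.
sudo systemctl stop cri-docker.service
sudo systemctl restart cri-docker.socket
sudo systemctl start cri-docker.service

# Confirm both units are active again.
systemctl is-active cri-docker.socket cri-docker.service
```

If this diagnosis is right, the underlying flake is a race in minikube's runtime re-enable path (restarting the socket while the service is still up), which would belong in a minikube issue rather than a test fix.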
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-107476
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-107476 -n multinode-107476
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 logs -n 25
E1218 11:54:11.708880  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-107476 logs -n 25: (1.417445893s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-107476 ssh -n                                                                 | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-107476 cp multinode-107476-m02:/home/docker/cp-test.txt                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2135286047/001/cp-test_multinode-107476-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n                                                                 | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-107476 cp multinode-107476-m02:/home/docker/cp-test.txt                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476:/home/docker/cp-test_multinode-107476-m02_multinode-107476.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n                                                                 | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n multinode-107476 sudo cat                                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | /home/docker/cp-test_multinode-107476-m02_multinode-107476.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-107476 cp multinode-107476-m02:/home/docker/cp-test.txt                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m03:/home/docker/cp-test_multinode-107476-m02_multinode-107476-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n                                                                 | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n multinode-107476-m03 sudo cat                                   | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | /home/docker/cp-test_multinode-107476-m02_multinode-107476-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-107476 cp testdata/cp-test.txt                                                | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n                                                                 | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-107476 cp multinode-107476-m03:/home/docker/cp-test.txt                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2135286047/001/cp-test_multinode-107476-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n                                                                 | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-107476 cp multinode-107476-m03:/home/docker/cp-test.txt                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476:/home/docker/cp-test_multinode-107476-m03_multinode-107476.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n                                                                 | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n multinode-107476 sudo cat                                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | /home/docker/cp-test_multinode-107476-m03_multinode-107476.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-107476 cp multinode-107476-m03:/home/docker/cp-test.txt                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m02:/home/docker/cp-test_multinode-107476-m03_multinode-107476-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n                                                                 | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n multinode-107476-m02 sudo cat                                   | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | /home/docker/cp-test_multinode-107476-m03_multinode-107476-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-107476 node stop m03                                                          | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	| node    | multinode-107476 node start                                                             | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:52 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-107476                                                                | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:52 UTC |                     |
	| stop    | -p multinode-107476                                                                     | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:52 UTC | 18 Dec 23 11:52 UTC |
	| start   | -p multinode-107476                                                                     | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:52 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-107476                                                                | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:54 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 11:52:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 11:52:43.588877  706399 out.go:296] Setting OutFile to fd 1 ...
	I1218 11:52:43.589039  706399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:52:43.589053  706399 out.go:309] Setting ErrFile to fd 2...
	I1218 11:52:43.589061  706399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:52:43.589245  706399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
	I1218 11:52:43.589801  706399 out.go:303] Setting JSON to false
	I1218 11:52:43.590759  706399 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12910,"bootTime":1702887454,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1218 11:52:43.590822  706399 start.go:138] virtualization: kvm guest
	I1218 11:52:43.593457  706399 out.go:177] * [multinode-107476] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1218 11:52:43.595324  706399 notify.go:220] Checking for updates...
	I1218 11:52:43.595332  706399 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 11:52:43.597000  706399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 11:52:43.598742  706399 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:52:43.600311  706399 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17824-683489/.minikube
	I1218 11:52:43.601844  706399 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1218 11:52:43.603279  706399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 11:52:43.605238  706399 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:52:43.605343  706399 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 11:52:43.605808  706399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:52:43.605854  706399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:52:43.620145  706399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34787
	I1218 11:52:43.620579  706399 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:52:43.621112  706399 main.go:141] libmachine: Using API Version  1
	I1218 11:52:43.621138  706399 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:52:43.621497  706399 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:52:43.621692  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:52:43.657009  706399 out.go:177] * Using the kvm2 driver based on existing profile
	I1218 11:52:43.658657  706399 start.go:298] selected driver: kvm2
	I1218 11:52:43.658673  706399 start.go:902] validating driver "kvm2" against &{Name:multinode-107476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-107476 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.238 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.39 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inacce
l:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:52:43.658875  706399 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 11:52:43.659246  706399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:52:43.659332  706399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17824-683489/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1218 11:52:43.674156  706399 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1218 11:52:43.674836  706399 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 11:52:43.674935  706399 cni.go:84] Creating CNI manager for ""
	I1218 11:52:43.674959  706399 cni.go:136] 3 nodes found, recommending kindnet
	I1218 11:52:43.674972  706399 start_flags.go:323] config:
	{Name:multinode-107476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-107476 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.238 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.39 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false ist
io-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:52:43.675263  706399 iso.go:125] acquiring lock: {Name:mk77379b84c746649cc72ce2f2c3817c5150de49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:52:43.677310  706399 out.go:177] * Starting control plane node multinode-107476 in cluster multinode-107476
	I1218 11:52:43.678882  706399 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 11:52:43.678926  706399 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1218 11:52:43.678945  706399 cache.go:56] Caching tarball of preloaded images
	I1218 11:52:43.679040  706399 preload.go:174] Found /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 11:52:43.679053  706399 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 11:52:43.679182  706399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/config.json ...
	I1218 11:52:43.679387  706399 start.go:365] acquiring machines lock for multinode-107476: {Name:mkb0cc9fb73bf09f8db2889f035117cd52674d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 11:52:43.679439  706399 start.go:369] acquired machines lock for "multinode-107476" in 30.186µs
	I1218 11:52:43.679462  706399 start.go:96] Skipping create...Using existing machine configuration
	I1218 11:52:43.679473  706399 fix.go:54] fixHost starting: 
	I1218 11:52:43.679818  706399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:52:43.679872  706399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:52:43.693824  706399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35321
	I1218 11:52:43.694215  706399 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:52:43.694677  706399 main.go:141] libmachine: Using API Version  1
	I1218 11:52:43.694699  706399 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:52:43.695098  706399 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:52:43.695284  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:52:43.695482  706399 main.go:141] libmachine: (multinode-107476) Calling .GetState
	I1218 11:52:43.697182  706399 fix.go:102] recreateIfNeeded on multinode-107476: state=Stopped err=<nil>
	I1218 11:52:43.697205  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	W1218 11:52:43.697378  706399 fix.go:128] unexpected machine state, will restart: <nil>
	I1218 11:52:43.699486  706399 out.go:177] * Restarting existing kvm2 VM for "multinode-107476" ...
	I1218 11:52:43.701188  706399 main.go:141] libmachine: (multinode-107476) Calling .Start
	I1218 11:52:43.701381  706399 main.go:141] libmachine: (multinode-107476) Ensuring networks are active...
	I1218 11:52:43.702137  706399 main.go:141] libmachine: (multinode-107476) Ensuring network default is active
	I1218 11:52:43.702575  706399 main.go:141] libmachine: (multinode-107476) Ensuring network mk-multinode-107476 is active
	I1218 11:52:43.702882  706399 main.go:141] libmachine: (multinode-107476) Getting domain xml...
	I1218 11:52:43.703479  706399 main.go:141] libmachine: (multinode-107476) Creating domain...
	I1218 11:52:44.937955  706399 main.go:141] libmachine: (multinode-107476) Waiting to get IP...
	I1218 11:52:44.939039  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:44.939474  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:44.939585  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:44.939441  706428 retry.go:31] will retry after 295.497233ms: waiting for machine to come up
	I1218 11:52:45.237103  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:45.237598  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:45.237650  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:45.237528  706428 retry.go:31] will retry after 241.852686ms: waiting for machine to come up
	I1218 11:52:45.481091  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:45.481474  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:45.481504  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:45.481425  706428 retry.go:31] will retry after 405.008398ms: waiting for machine to come up
	I1218 11:52:45.887993  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:45.888530  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:45.888561  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:45.888436  706428 retry.go:31] will retry after 596.878679ms: waiting for machine to come up
	I1218 11:52:46.487207  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:46.487686  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:46.487723  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:46.487646  706428 retry.go:31] will retry after 479.661609ms: waiting for machine to come up
	I1218 11:52:46.969331  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:46.969779  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:46.969813  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:46.969718  706428 retry.go:31] will retry after 695.785621ms: waiting for machine to come up
	I1218 11:52:47.666484  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:47.666895  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:47.666928  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:47.666826  706428 retry.go:31] will retry after 798.848059ms: waiting for machine to come up
	I1218 11:52:48.466719  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:48.467146  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:48.467178  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:48.467086  706428 retry.go:31] will retry after 1.485767878s: waiting for machine to come up
	I1218 11:52:49.954305  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:49.954699  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:49.954749  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:49.954654  706428 retry.go:31] will retry after 1.819619299s: waiting for machine to come up
	I1218 11:52:51.776607  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:51.776992  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:51.777016  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:51.776952  706428 retry.go:31] will retry after 2.317000445s: waiting for machine to come up
	I1218 11:52:54.096025  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:54.096436  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:54.096462  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:54.096372  706428 retry.go:31] will retry after 2.107748825s: waiting for machine to come up
	I1218 11:52:56.206568  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:56.206940  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:56.206971  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:56.206886  706428 retry.go:31] will retry after 2.701224561s: waiting for machine to come up
	I1218 11:52:58.909780  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:58.910163  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:58.910194  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:58.910118  706428 retry.go:31] will retry after 4.332174915s: waiting for machine to come up
	I1218 11:53:03.247678  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.248150  706399 main.go:141] libmachine: (multinode-107476) Found IP for machine: 192.168.39.124
	I1218 11:53:03.248181  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has current primary IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.248192  706399 main.go:141] libmachine: (multinode-107476) Reserving static IP address...
	I1218 11:53:03.248681  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "multinode-107476", mac: "52:54:00:4e:59:cb", ip: "192.168.39.124"} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.248710  706399 main.go:141] libmachine: (multinode-107476) DBG | skip adding static IP to network mk-multinode-107476 - found existing host DHCP lease matching {name: "multinode-107476", mac: "52:54:00:4e:59:cb", ip: "192.168.39.124"}
	I1218 11:53:03.248725  706399 main.go:141] libmachine: (multinode-107476) Reserved static IP address: 192.168.39.124
	I1218 11:53:03.248735  706399 main.go:141] libmachine: (multinode-107476) DBG | Getting to WaitForSSH function...
	I1218 11:53:03.248752  706399 main.go:141] libmachine: (multinode-107476) Waiting for SSH to be available...
	I1218 11:53:03.250850  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.251272  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.251305  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.251380  706399 main.go:141] libmachine: (multinode-107476) DBG | Using SSH client type: external
	I1218 11:53:03.251431  706399 main.go:141] libmachine: (multinode-107476) DBG | Using SSH private key: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa (-rw-------)
	I1218 11:53:03.251495  706399 main.go:141] libmachine: (multinode-107476) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.124 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1218 11:53:03.251518  706399 main.go:141] libmachine: (multinode-107476) DBG | About to run SSH command:
	I1218 11:53:03.251537  706399 main.go:141] libmachine: (multinode-107476) DBG | exit 0
	I1218 11:53:03.347693  706399 main.go:141] libmachine: (multinode-107476) DBG | SSH cmd err, output: <nil>: 
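The "Using SSH client type: external" lines show libmachine probing the new VM by shelling out to the system `ssh` binary with host-key checking disabled (throwaway known_hosts, key-only auth) and running `exit 0` until it succeeds. A sketch of assembling that invocation with `os/exec` (helper name and signature are illustrative, not libmachine's API):

```go
package main

import (
	"fmt"
	"os/exec"
)

// buildSSHCommand assembles an external ssh invocation with the options
// visible in the log: no config file, short connect timeout, no password
// auth, disposable known_hosts, and a single identity file.
func buildSSHCommand(user, host, keyPath, remoteCmd string) *exec.Cmd {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		remoteCmd,
	}
	return exec.Command("ssh", args...)
}

func main() {
	// "exit 0" is the availability probe the log runs ("About to run SSH
	// command: exit 0"); success means sshd is up and the key is accepted.
	cmd := buildSSHCommand("docker", "192.168.39.124", "/tmp/id_rsa", "exit 0")
	fmt.Println(cmd.Args)
}
```

Disabling `StrictHostKeyChecking` is safe here only because the target is a freshly created local VM whose host key is necessarily unknown.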
	I1218 11:53:03.348069  706399 main.go:141] libmachine: (multinode-107476) Calling .GetConfigRaw
	I1218 11:53:03.348923  706399 main.go:141] libmachine: (multinode-107476) Calling .GetIP
	I1218 11:53:03.351464  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.351874  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.351906  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.352189  706399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/config.json ...
	I1218 11:53:03.352408  706399 machine.go:88] provisioning docker machine ...
	I1218 11:53:03.352426  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:03.352628  706399 main.go:141] libmachine: (multinode-107476) Calling .GetMachineName
	I1218 11:53:03.352841  706399 buildroot.go:166] provisioning hostname "multinode-107476"
	I1218 11:53:03.352861  706399 main.go:141] libmachine: (multinode-107476) Calling .GetMachineName
	I1218 11:53:03.353044  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:03.355260  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.355633  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.355665  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.355775  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:03.355965  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.356114  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.356209  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:03.356327  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:03.356684  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:03.356702  706399 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-107476 && echo "multinode-107476" | sudo tee /etc/hostname
	I1218 11:53:03.495478  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-107476
	
	I1218 11:53:03.495519  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:03.498288  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.498747  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.498802  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.499026  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:03.499258  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.499423  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.499560  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:03.499796  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:03.500102  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:03.500118  706399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-107476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-107476/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-107476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 11:53:03.636275  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
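The shell snippet above is a guarded /etc/hosts update: do nothing if the hostname is already mapped, otherwise rewrite the existing `127.0.1.1` line or append a new one. The same logic expressed in Go, operating on the file contents as a string (a simplified sketch, not minikube's implementation):

```go
package main

import (
	"fmt"
	"regexp"
)

// ensureHostsEntry mirrors the grep/sed/tee guard run over SSH: if no line
// maps name, rewrite the 127.0.1.1 entry when present, else append one.
// Idempotent, like the shell version.
func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // hostname already mapped, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	out := ensureHostsEntry("127.0.0.1 localhost\n", "multinode-107476")
	fmt.Print(out)
}
```

Mapping the hostname to `127.0.1.1` (rather than `127.0.0.1`) follows the Debian convention for hosts without a permanent IP, which is why the guard targets that specific address.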
	I1218 11:53:03.636312  706399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17824-683489/.minikube CaCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17824-683489/.minikube}
	I1218 11:53:03.636332  706399 buildroot.go:174] setting up certificates
	I1218 11:53:03.636351  706399 provision.go:83] configureAuth start
	I1218 11:53:03.636370  706399 main.go:141] libmachine: (multinode-107476) Calling .GetMachineName
	I1218 11:53:03.636693  706399 main.go:141] libmachine: (multinode-107476) Calling .GetIP
	I1218 11:53:03.639303  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.639759  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.639801  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.639935  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:03.641968  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.642455  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.642483  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.642629  706399 provision.go:138] copyHostCerts
	I1218 11:53:03.642664  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem
	I1218 11:53:03.642722  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem, removing ...
	I1218 11:53:03.642737  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem
	I1218 11:53:03.642819  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem (1082 bytes)
	I1218 11:53:03.642933  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem
	I1218 11:53:03.642958  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem, removing ...
	I1218 11:53:03.642970  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem
	I1218 11:53:03.643012  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem (1123 bytes)
	I1218 11:53:03.643087  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem
	I1218 11:53:03.643118  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem, removing ...
	I1218 11:53:03.643123  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem
	I1218 11:53:03.643155  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem (1679 bytes)
	I1218 11:53:03.643235  706399 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca-key.pem org=jenkins.multinode-107476 san=[192.168.39.124 192.168.39.124 localhost 127.0.0.1 minikube multinode-107476]
	I1218 11:53:03.728895  706399 provision.go:172] copyRemoteCerts
	I1218 11:53:03.728965  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 11:53:03.728993  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:03.732532  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.733011  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.733057  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.733166  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:03.733459  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.733658  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:03.733825  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa Username:docker}
	I1218 11:53:03.829438  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 11:53:03.829540  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 11:53:03.851440  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 11:53:03.851526  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1218 11:53:03.872997  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 11:53:03.873064  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 11:53:03.894126  706399 provision.go:86] duration metric: configureAuth took 257.762653ms
	I1218 11:53:03.894171  706399 buildroot.go:189] setting minikube options for container-runtime
	I1218 11:53:03.894430  706399 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:53:03.894459  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:03.894777  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:03.897379  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.897774  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.897800  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.897918  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:03.898164  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.898354  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.898519  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:03.898720  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:03.899054  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:03.899067  706399 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 11:53:04.029431  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 11:53:04.029454  706399 buildroot.go:70] root file system type: tmpfs
	I1218 11:53:04.029610  706399 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 11:53:04.029643  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:04.032284  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:04.032632  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:04.032657  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:04.032884  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:04.033092  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:04.033244  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:04.033356  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:04.033497  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:04.033807  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:04.033872  706399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 11:53:04.172200  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 11:53:04.172259  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:04.175231  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:04.175567  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:04.175603  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:04.175767  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:04.175973  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:04.176163  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:04.176296  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:04.176471  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:04.176900  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:04.176921  706399 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 11:53:05.124159  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1218 11:53:05.124189  706399 machine.go:91] provisioned docker machine in 1.771768968s
	I1218 11:53:05.124202  706399 start.go:300] post-start starting for "multinode-107476" (driver="kvm2")
	I1218 11:53:05.124213  706399 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 11:53:05.124248  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:05.124618  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 11:53:05.124659  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:05.127177  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.127511  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:05.127543  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.127822  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:05.128019  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:05.128232  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:05.128365  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa Username:docker}
	I1218 11:53:05.221325  706399 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 11:53:05.225431  706399 command_runner.go:130] > NAME=Buildroot
	I1218 11:53:05.225452  706399 command_runner.go:130] > VERSION=2021.02.12-1-g0492d51-dirty
	I1218 11:53:05.225458  706399 command_runner.go:130] > ID=buildroot
	I1218 11:53:05.225465  706399 command_runner.go:130] > VERSION_ID=2021.02.12
	I1218 11:53:05.225470  706399 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1218 11:53:05.225498  706399 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 11:53:05.225513  706399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17824-683489/.minikube/addons for local assets ...
	I1218 11:53:05.225581  706399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17824-683489/.minikube/files for local assets ...
	I1218 11:53:05.225689  706399 filesync.go:149] local asset: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem -> 6907392.pem in /etc/ssl/certs
	I1218 11:53:05.225707  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem -> /etc/ssl/certs/6907392.pem
	I1218 11:53:05.225825  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 11:53:05.234060  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem --> /etc/ssl/certs/6907392.pem (1708 bytes)
	I1218 11:53:05.256308  706399 start.go:303] post-start completed in 132.091269ms
	I1218 11:53:05.256346  706399 fix.go:56] fixHost completed within 21.576872921s
	I1218 11:53:05.256378  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:05.259066  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.259438  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:05.259467  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.259594  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:05.259822  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:05.260000  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:05.260132  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:05.260300  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:05.260663  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:05.260677  706399 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1218 11:53:05.388710  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702900385.336515708
	
	I1218 11:53:05.388739  706399 fix.go:206] guest clock: 1702900385.336515708
	I1218 11:53:05.388748  706399 fix.go:219] Guest: 2023-12-18 11:53:05.336515708 +0000 UTC Remote: 2023-12-18 11:53:05.256351307 +0000 UTC m=+21.719709962 (delta=80.164401ms)
	I1218 11:53:05.388776  706399 fix.go:190] guest clock delta is within tolerance: 80.164401ms
	I1218 11:53:05.388781  706399 start.go:83] releasing machines lock for "multinode-107476", held for 21.709329749s
	I1218 11:53:05.388800  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:05.389070  706399 main.go:141] libmachine: (multinode-107476) Calling .GetIP
	I1218 11:53:05.391842  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.392255  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:05.392297  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.392448  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:05.392945  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:05.393126  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:05.393230  706399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 11:53:05.393297  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:05.393344  706399 ssh_runner.go:195] Run: cat /version.json
	I1218 11:53:05.393374  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:05.396053  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.396366  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:05.396390  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.396415  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.396575  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:05.396796  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:05.396908  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:05.396935  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.396951  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:05.397108  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:05.397138  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa Username:docker}
	I1218 11:53:05.397245  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:05.397399  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:05.397526  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa Username:docker}
	I1218 11:53:05.484417  706399 command_runner.go:130] > {"iso_version": "v1.32.1-1702490427-17765", "kicbase_version": "v0.0.42-1702394725-17761", "minikube_version": "v1.32.0", "commit": "2780c4af854905e5cd4b94dc93de1f9d00b9040d"}
	I1218 11:53:05.484584  706399 ssh_runner.go:195] Run: systemctl --version
	I1218 11:53:05.515488  706399 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 11:53:05.515582  706399 command_runner.go:130] > systemd 247 (247)
	I1218 11:53:05.515612  706399 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1218 11:53:05.515721  706399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 11:53:05.522226  706399 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1218 11:53:05.522290  706399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 11:53:05.522345  706399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 11:53:05.538265  706399 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1218 11:53:05.538337  706399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1218 11:53:05.538357  706399 start.go:475] detecting cgroup driver to use...
	I1218 11:53:05.538518  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 11:53:05.556555  706399 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1218 11:53:05.556669  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 11:53:05.566263  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 11:53:05.575359  706399 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 11:53:05.575428  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 11:53:05.584526  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 11:53:05.593691  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 11:53:05.602941  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 11:53:05.612320  706399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 11:53:05.621674  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 11:53:05.630899  706399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 11:53:05.639775  706399 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 11:53:05.640003  706399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 11:53:05.648244  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:53:05.747265  706399 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 11:53:05.764104  706399 start.go:475] detecting cgroup driver to use...
	I1218 11:53:05.764197  706399 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 11:53:05.781204  706399 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1218 11:53:05.781232  706399 command_runner.go:130] > [Unit]
	I1218 11:53:05.781238  706399 command_runner.go:130] > Description=Docker Application Container Engine
	I1218 11:53:05.781249  706399 command_runner.go:130] > Documentation=https://docs.docker.com
	I1218 11:53:05.781255  706399 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1218 11:53:05.781260  706399 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1218 11:53:05.781269  706399 command_runner.go:130] > StartLimitBurst=3
	I1218 11:53:05.781273  706399 command_runner.go:130] > StartLimitIntervalSec=60
	I1218 11:53:05.781277  706399 command_runner.go:130] > [Service]
	I1218 11:53:05.781283  706399 command_runner.go:130] > Type=notify
	I1218 11:53:05.781287  706399 command_runner.go:130] > Restart=on-failure
	I1218 11:53:05.781294  706399 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1218 11:53:05.781305  706399 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1218 11:53:05.781312  706399 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1218 11:53:05.781321  706399 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1218 11:53:05.781332  706399 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1218 11:53:05.781338  706399 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1218 11:53:05.781348  706399 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1218 11:53:05.781360  706399 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1218 11:53:05.781374  706399 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1218 11:53:05.781380  706399 command_runner.go:130] > ExecStart=
	I1218 11:53:05.781395  706399 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1218 11:53:05.781406  706399 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1218 11:53:05.781420  706399 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1218 11:53:05.781437  706399 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1218 11:53:05.781448  706399 command_runner.go:130] > LimitNOFILE=infinity
	I1218 11:53:05.781457  706399 command_runner.go:130] > LimitNPROC=infinity
	I1218 11:53:05.781466  706399 command_runner.go:130] > LimitCORE=infinity
	I1218 11:53:05.781478  706399 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1218 11:53:05.781489  706399 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1218 11:53:05.781503  706399 command_runner.go:130] > TasksMax=infinity
	I1218 11:53:05.781510  706399 command_runner.go:130] > TimeoutStartSec=0
	I1218 11:53:05.781518  706399 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1218 11:53:05.781524  706399 command_runner.go:130] > Delegate=yes
	I1218 11:53:05.781533  706399 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1218 11:53:05.781540  706399 command_runner.go:130] > KillMode=process
	I1218 11:53:05.781546  706399 command_runner.go:130] > [Install]
	I1218 11:53:05.781565  706399 command_runner.go:130] > WantedBy=multi-user.target
	I1218 11:53:05.781637  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 11:53:05.804433  706399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 11:53:05.824109  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 11:53:05.835893  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 11:53:05.847147  706399 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 11:53:05.877224  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 11:53:05.889672  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 11:53:05.907426  706399 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1218 11:53:05.907507  706399 ssh_runner.go:195] Run: which cri-dockerd
	I1218 11:53:05.910712  706399 command_runner.go:130] > /usr/bin/cri-dockerd
	I1218 11:53:05.911118  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 11:53:05.919164  706399 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 11:53:05.935395  706399 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 11:53:06.037158  706399 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 11:53:06.143405  706399 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 11:53:06.143544  706399 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1218 11:53:06.160341  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:53:06.269342  706399 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 11:53:07.733823  706399 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.464413724s)
	I1218 11:53:07.733899  706399 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 11:53:07.833594  706399 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1218 11:53:07.945199  706399 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 11:53:08.049248  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:53:08.158198  706399 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1218 11:53:08.174701  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:53:08.276820  706399 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1218 11:53:08.358434  706399 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1218 11:53:08.358505  706399 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1218 11:53:08.364441  706399 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1218 11:53:08.364463  706399 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1218 11:53:08.364470  706399 command_runner.go:130] > Device: 16h/22d	Inode: 833         Links: 1
	I1218 11:53:08.364476  706399 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1218 11:53:08.364488  706399 command_runner.go:130] > Access: 2023-12-18 11:53:08.237952217 +0000
	I1218 11:53:08.364496  706399 command_runner.go:130] > Modify: 2023-12-18 11:53:08.237952217 +0000
	I1218 11:53:08.364506  706399 command_runner.go:130] > Change: 2023-12-18 11:53:08.240952217 +0000
	I1218 11:53:08.364516  706399 command_runner.go:130] >  Birth: -
	I1218 11:53:08.364858  706399 start.go:543] Will wait 60s for crictl version
	I1218 11:53:08.364931  706399 ssh_runner.go:195] Run: which crictl
	I1218 11:53:08.368876  706399 command_runner.go:130] > /usr/bin/crictl
	I1218 11:53:08.369038  706399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1218 11:53:08.420803  706399 command_runner.go:130] > Version:  0.1.0
	I1218 11:53:08.420827  706399 command_runner.go:130] > RuntimeName:  docker
	I1218 11:53:08.420831  706399 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1218 11:53:08.420836  706399 command_runner.go:130] > RuntimeApiVersion:  v1
	I1218 11:53:08.420859  706399 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1218 11:53:08.420916  706399 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 11:53:08.449342  706399 command_runner.go:130] > 24.0.7
	I1218 11:53:08.450610  706399 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 11:53:08.475832  706399 command_runner.go:130] > 24.0.7
	I1218 11:53:08.478214  706399 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1218 11:53:08.478259  706399 main.go:141] libmachine: (multinode-107476) Calling .GetIP
	I1218 11:53:08.481071  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:08.481405  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:08.481434  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:08.481669  706399 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1218 11:53:08.485727  706399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 11:53:08.498500  706399 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 11:53:08.498560  706399 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 11:53:08.517432  706399 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1218 11:53:08.517456  706399 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1218 11:53:08.517461  706399 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1218 11:53:08.517467  706399 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1218 11:53:08.517472  706399 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1218 11:53:08.517479  706399 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1218 11:53:08.517488  706399 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1218 11:53:08.517493  706399 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1218 11:53:08.517498  706399 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 11:53:08.517502  706399 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1218 11:53:08.518427  706399 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1218 11:53:08.518444  706399 docker.go:601] Images already preloaded, skipping extraction
	I1218 11:53:08.518497  706399 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 11:53:08.540045  706399 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1218 11:53:08.540071  706399 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1218 11:53:08.540079  706399 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1218 11:53:08.540103  706399 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1218 11:53:08.540112  706399 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1218 11:53:08.540125  706399 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1218 11:53:08.540143  706399 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1218 11:53:08.540151  706399 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1218 11:53:08.540160  706399 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 11:53:08.540172  706399 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1218 11:53:08.540915  706399 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1218 11:53:08.540940  706399 cache_images.go:84] Images are preloaded, skipping loading
	I1218 11:53:08.541003  706399 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1218 11:53:08.570799  706399 command_runner.go:130] > cgroupfs
	I1218 11:53:08.570938  706399 cni.go:84] Creating CNI manager for ""
	I1218 11:53:08.570956  706399 cni.go:136] 3 nodes found, recommending kindnet
	I1218 11:53:08.570983  706399 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 11:53:08.571015  706399 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.124 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-107476 NodeName:multinode-107476 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 11:53:08.571172  706399 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.124
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-107476"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.124
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.124"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 11:53:08.571284  706399 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-107476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-107476 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1218 11:53:08.571354  706399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1218 11:53:08.580283  706399 command_runner.go:130] > kubeadm
	I1218 11:53:08.580300  706399 command_runner.go:130] > kubectl
	I1218 11:53:08.580304  706399 command_runner.go:130] > kubelet
	I1218 11:53:08.580321  706399 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 11:53:08.580377  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 11:53:08.588532  706399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1218 11:53:08.604728  706399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 11:53:08.620425  706399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1218 11:53:08.636780  706399 ssh_runner.go:195] Run: grep 192.168.39.124	control-plane.minikube.internal$ /etc/hosts
	I1218 11:53:08.640548  706399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.124	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 11:53:08.652739  706399 certs.go:56] Setting up /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476 for IP: 192.168.39.124
	I1218 11:53:08.652776  706399 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1aed956519f14c4fcaee2b34a279c90e2b05a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:53:08.652956  706399 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.key
	I1218 11:53:08.653001  706399 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17824-683489/.minikube/proxy-client-ca.key
	I1218 11:53:08.653075  706399 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/client.key
	I1218 11:53:08.653122  706399 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/apiserver.key.9675f833
	I1218 11:53:08.653155  706399 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/proxy-client.key
	I1218 11:53:08.653165  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1218 11:53:08.653181  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1218 11:53:08.653193  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1218 11:53:08.653201  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1218 11:53:08.653213  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1218 11:53:08.653222  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1218 11:53:08.653233  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1218 11:53:08.653244  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1218 11:53:08.653292  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/690739.pem (1338 bytes)
	W1218 11:53:08.653316  706399 certs.go:433] ignoring /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/690739_empty.pem, impossibly tiny 0 bytes
	I1218 11:53:08.653332  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 11:53:08.653359  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem (1082 bytes)
	I1218 11:53:08.653383  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem (1123 bytes)
	I1218 11:53:08.653409  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem (1679 bytes)
	I1218 11:53:08.653448  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem (1708 bytes)
	I1218 11:53:08.653474  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem -> /usr/share/ca-certificates/6907392.pem
	I1218 11:53:08.653489  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:53:08.653501  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/690739.pem -> /usr/share/ca-certificates/690739.pem
	I1218 11:53:08.654088  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1218 11:53:08.677424  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1218 11:53:08.700082  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 11:53:08.722631  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 11:53:08.744711  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 11:53:08.766872  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1218 11:53:08.789385  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 11:53:08.812077  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 11:53:08.834610  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem --> /usr/share/ca-certificates/6907392.pem (1708 bytes)
	I1218 11:53:08.857333  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 11:53:08.879344  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/certs/690739.pem --> /usr/share/ca-certificates/690739.pem (1338 bytes)
	I1218 11:53:08.901384  706399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 11:53:08.916780  706399 ssh_runner.go:195] Run: openssl version
	I1218 11:53:08.922282  706399 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1218 11:53:08.922341  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6907392.pem && ln -fs /usr/share/ca-certificates/6907392.pem /etc/ssl/certs/6907392.pem"
	I1218 11:53:08.931642  706399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6907392.pem
	I1218 11:53:08.935749  706399 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 18 11:35 /usr/share/ca-certificates/6907392.pem
	I1218 11:53:08.935958  706399 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 18 11:35 /usr/share/ca-certificates/6907392.pem
	I1218 11:53:08.936017  706399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6907392.pem
	I1218 11:53:08.941156  706399 command_runner.go:130] > 3ec20f2e
	I1218 11:53:08.941471  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6907392.pem /etc/ssl/certs/3ec20f2e.0"
	I1218 11:53:08.950462  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 11:53:08.959471  706399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:53:08.963656  706399 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:53:08.963960  706399 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:53:08.964002  706399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:53:08.969248  706399 command_runner.go:130] > b5213941
	I1218 11:53:08.969314  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1218 11:53:08.978275  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/690739.pem && ln -fs /usr/share/ca-certificates/690739.pem /etc/ssl/certs/690739.pem"
	I1218 11:53:08.987435  706399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/690739.pem
	I1218 11:53:08.991559  706399 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 18 11:35 /usr/share/ca-certificates/690739.pem
	I1218 11:53:08.991833  706399 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 18 11:35 /usr/share/ca-certificates/690739.pem
	I1218 11:53:08.991883  706399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/690739.pem
	I1218 11:53:08.997219  706399 command_runner.go:130] > 51391683
	I1218 11:53:08.997300  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/690739.pem /etc/ssl/certs/51391683.0"
	I1218 11:53:09.007519  706399 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 11:53:09.011748  706399 command_runner.go:130] > ca.crt
	I1218 11:53:09.011764  706399 command_runner.go:130] > ca.key
	I1218 11:53:09.011769  706399 command_runner.go:130] > healthcheck-client.crt
	I1218 11:53:09.011773  706399 command_runner.go:130] > healthcheck-client.key
	I1218 11:53:09.011778  706399 command_runner.go:130] > peer.crt
	I1218 11:53:09.011782  706399 command_runner.go:130] > peer.key
	I1218 11:53:09.011786  706399 command_runner.go:130] > server.crt
	I1218 11:53:09.011793  706399 command_runner.go:130] > server.key
	I1218 11:53:09.011883  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 11:53:09.017731  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.017835  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 11:53:09.023186  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.023240  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 11:53:09.028589  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.028641  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 11:53:09.033905  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.033983  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 11:53:09.039296  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.039520  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1218 11:53:09.044713  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.044770  706399 kubeadm.go:404] StartCluster: {Name:multinode-107476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-107476 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.238 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.39 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingr
ess:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:53:09.044901  706399 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1218 11:53:09.063644  706399 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 11:53:09.072501  706399 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1218 11:53:09.072518  706399 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1218 11:53:09.072524  706399 command_runner.go:130] > /var/lib/minikube/etcd:
	I1218 11:53:09.072529  706399 command_runner.go:130] > member
	I1218 11:53:09.072549  706399 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1218 11:53:09.072562  706399 kubeadm.go:636] restartCluster start
	I1218 11:53:09.072621  706399 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 11:53:09.080707  706399 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:09.081213  706399 kubeconfig.go:135] verify returned: extract IP: "multinode-107476" does not appear in /home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:53:09.081366  706399 kubeconfig.go:146] "multinode-107476" context is missing from /home/jenkins/minikube-integration/17824-683489/kubeconfig - will repair!
	I1218 11:53:09.081646  706399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17824-683489/kubeconfig: {Name:mkbe3b47b918311ed7d778fc321c77660f5f2482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:53:09.082090  706399 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:53:09.082328  706399 kapi.go:59] client config for multinode-107476: &rest.Config{Host:"https://192.168.39.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/client.crt", KeyFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/client.key", CAFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1ed00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 11:53:09.082929  706399 cert_rotation.go:137] Starting client certificate rotation controller
	I1218 11:53:09.083156  706399 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 11:53:09.090938  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:09.090982  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:09.101227  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:09.591919  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:09.592030  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:09.603387  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:10.091928  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:10.092030  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:10.103288  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:10.591906  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:10.592032  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:10.602954  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:11.091515  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:11.091641  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:11.103090  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:11.591669  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:11.591804  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:11.603393  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:12.092006  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:12.092105  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:12.103893  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:12.591441  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:12.591518  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:12.602651  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:13.091237  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:13.091369  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:13.103118  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:13.590973  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:13.592383  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:13.603723  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:14.091222  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:14.091346  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:14.102533  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:14.591068  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:14.591166  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:14.602318  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:15.091932  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:15.092046  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:15.103581  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:15.591099  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:15.591204  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:15.602422  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:16.091999  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:16.092095  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:16.103457  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:16.591070  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:16.591174  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:16.602679  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:17.091238  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:17.091370  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:17.103125  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:17.591667  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:17.591745  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:17.602974  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:18.091582  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:18.091718  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:18.103155  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:18.591946  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:18.592225  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:18.603460  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:19.091322  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:19.091400  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:19.102630  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:19.102658  706399 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1218 11:53:19.102668  706399 kubeadm.go:1135] stopping kube-system containers ...
	I1218 11:53:19.102726  706399 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1218 11:53:19.126882  706399 command_runner.go:130] > 8a9a67bb77c4
	I1218 11:53:19.126909  706399 command_runner.go:130] > de7401b83d12
	I1218 11:53:19.126915  706399 command_runner.go:130] > fecf0ace453c
	I1218 11:53:19.126921  706399 command_runner.go:130] > a5499078bf2c
	I1218 11:53:19.126928  706399 command_runner.go:130] > f6e3111557b6
	I1218 11:53:19.126934  706399 command_runner.go:130] > 9bd0f65050dc
	I1218 11:53:19.126939  706399 command_runner.go:130] > ecad224e7387
	I1218 11:53:19.126946  706399 command_runner.go:130] > ca78bca379eb
	I1218 11:53:19.126952  706399 command_runner.go:130] > 367a10c5d07b
	I1218 11:53:19.126961  706399 command_runner.go:130] > fcaaf17b1ede
	I1218 11:53:19.126966  706399 command_runner.go:130] > 9226aa8cd1e9
	I1218 11:53:19.126975  706399 command_runner.go:130] > 4b66d146a3f4
	I1218 11:53:19.126982  706399 command_runner.go:130] > d06f419d4917
	I1218 11:53:19.126996  706399 command_runner.go:130] > 49adada57ae1
	I1218 11:53:19.127005  706399 command_runner.go:130] > 51c0e2b56511
	I1218 11:53:19.127012  706399 command_runner.go:130] > 7539f6919992
	I1218 11:53:19.127994  706399 docker.go:469] Stopping containers: [8a9a67bb77c4 de7401b83d12 fecf0ace453c a5499078bf2c f6e3111557b6 9bd0f65050dc ecad224e7387 ca78bca379eb 367a10c5d07b fcaaf17b1ede 9226aa8cd1e9 4b66d146a3f4 d06f419d4917 49adada57ae1 51c0e2b56511 7539f6919992]
	I1218 11:53:19.128071  706399 ssh_runner.go:195] Run: docker stop 8a9a67bb77c4 de7401b83d12 fecf0ace453c a5499078bf2c f6e3111557b6 9bd0f65050dc ecad224e7387 ca78bca379eb 367a10c5d07b fcaaf17b1ede 9226aa8cd1e9 4b66d146a3f4 d06f419d4917 49adada57ae1 51c0e2b56511 7539f6919992
	I1218 11:53:19.146845  706399 command_runner.go:130] > 8a9a67bb77c4
	I1218 11:53:19.146887  706399 command_runner.go:130] > de7401b83d12
	I1218 11:53:19.146894  706399 command_runner.go:130] > fecf0ace453c
	I1218 11:53:19.148422  706399 command_runner.go:130] > a5499078bf2c
	I1218 11:53:19.148444  706399 command_runner.go:130] > f6e3111557b6
	I1218 11:53:19.148709  706399 command_runner.go:130] > 9bd0f65050dc
	I1218 11:53:19.148746  706399 command_runner.go:130] > ecad224e7387
	I1218 11:53:19.150621  706399 command_runner.go:130] > ca78bca379eb
	I1218 11:53:19.150979  706399 command_runner.go:130] > 367a10c5d07b
	I1218 11:53:19.150995  706399 command_runner.go:130] > fcaaf17b1ede
	I1218 11:53:19.151009  706399 command_runner.go:130] > 9226aa8cd1e9
	I1218 11:53:19.151182  706399 command_runner.go:130] > 4b66d146a3f4
	I1218 11:53:19.151421  706399 command_runner.go:130] > d06f419d4917
	I1218 11:53:19.151682  706399 command_runner.go:130] > 49adada57ae1
	I1218 11:53:19.151693  706399 command_runner.go:130] > 51c0e2b56511
	I1218 11:53:19.151697  706399 command_runner.go:130] > 7539f6919992
	I1218 11:53:19.152748  706399 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1218 11:53:19.167208  706399 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 11:53:19.175617  706399 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1218 11:53:19.175659  706399 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1218 11:53:19.175670  706399 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1218 11:53:19.175682  706399 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 11:53:19.175764  706399 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 11:53:19.175829  706399 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 11:53:19.184086  706399 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1218 11:53:19.184108  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:19.290255  706399 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 11:53:19.290616  706399 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1218 11:53:19.291271  706399 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1218 11:53:19.291767  706399 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 11:53:19.292523  706399 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1218 11:53:19.293290  706399 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1218 11:53:19.294173  706399 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1218 11:53:19.294659  706399 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1218 11:53:19.295268  706399 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1218 11:53:19.295750  706399 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 11:53:19.296399  706399 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 11:53:19.297138  706399 command_runner.go:130] > [certs] Using the existing "sa" key
	I1218 11:53:19.298557  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:19.350785  706399 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 11:53:19.458190  706399 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 11:53:19.753510  706399 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 11:53:19.917725  706399 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 11:53:20.041823  706399 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 11:53:20.044334  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:20.111720  706399 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 11:53:20.113879  706399 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 11:53:20.113900  706399 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1218 11:53:20.233250  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:20.333464  706399 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 11:53:20.333508  706399 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 11:53:20.333519  706399 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 11:53:20.333529  706399 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 11:53:20.333603  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:20.388000  706399 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 11:53:20.403526  706399 api_server.go:52] waiting for apiserver process to appear ...
	I1218 11:53:20.403632  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:20.904600  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:21.403801  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:21.904580  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:22.403835  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:22.903754  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:22.917660  706399 command_runner.go:130] > 1729
	I1218 11:53:22.922833  706399 api_server.go:72] duration metric: took 2.519305176s to wait for apiserver process to appear ...
	I1218 11:53:22.922860  706399 api_server.go:88] waiting for apiserver healthz status ...
	I1218 11:53:22.922886  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:22.923542  706399 api_server.go:269] stopped: https://192.168.39.124:8443/healthz: Get "https://192.168.39.124:8443/healthz": dial tcp 192.168.39.124:8443: connect: connection refused
	I1218 11:53:23.423182  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:25.843152  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1218 11:53:25.843187  706399 api_server.go:103] status: https://192.168.39.124:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1218 11:53:25.843205  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:25.909873  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1218 11:53:25.909925  706399 api_server.go:103] status: https://192.168.39.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1218 11:53:25.922999  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:25.929359  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1218 11:53:25.929386  706399 api_server.go:103] status: https://192.168.39.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1218 11:53:26.422960  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:26.428892  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1218 11:53:26.428928  706399 api_server.go:103] status: https://192.168.39.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1218 11:53:26.923578  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:26.931290  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1218 11:53:26.931325  706399 api_server.go:103] status: https://192.168.39.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1218 11:53:27.423966  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:27.429135  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 200:
	ok
	I1218 11:53:27.429243  706399 round_trippers.go:463] GET https://192.168.39.124:8443/version
	I1218 11:53:27.429252  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:27.429261  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:27.429267  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:27.437137  706399 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1218 11:53:27.437163  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:27.437172  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:27.437179  706399 round_trippers.go:580]     Content-Length: 264
	I1218 11:53:27.437187  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:27 GMT
	I1218 11:53:27.437194  706399 round_trippers.go:580]     Audit-Id: e12ea9f6-c15b-4448-831c-e69c87f78e83
	I1218 11:53:27.437211  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:27.437223  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:27.437234  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:27.437262  706399 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1218 11:53:27.437348  706399 api_server.go:141] control plane version: v1.28.4
	I1218 11:53:27.437371  706399 api_server.go:131] duration metric: took 4.514501797s to wait for apiserver health ...
	I1218 11:53:27.437384  706399 cni.go:84] Creating CNI manager for ""
	I1218 11:53:27.437394  706399 cni.go:136] 3 nodes found, recommending kindnet
	I1218 11:53:27.439521  706399 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1218 11:53:27.441036  706399 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1218 11:53:27.450911  706399 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1218 11:53:27.450934  706399 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1218 11:53:27.450953  706399 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1218 11:53:27.450964  706399 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 11:53:27.450981  706399 command_runner.go:130] > Access: 2023-12-18 11:52:56.552952217 +0000
	I1218 11:53:27.450993  706399 command_runner.go:130] > Modify: 2023-12-13 23:27:31.000000000 +0000
	I1218 11:53:27.451003  706399 command_runner.go:130] > Change: 2023-12-18 11:52:54.793952217 +0000
	I1218 11:53:27.451013  706399 command_runner.go:130] >  Birth: -
	I1218 11:53:27.458216  706399 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1218 11:53:27.458236  706399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1218 11:53:27.509185  706399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1218 11:53:28.905245  706399 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1218 11:53:28.912521  706399 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1218 11:53:28.916523  706399 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1218 11:53:28.934945  706399 command_runner.go:130] > daemonset.apps/kindnet configured
	I1218 11:53:28.940934  706399 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.431702924s)
	I1218 11:53:28.940965  706399 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 11:53:28.941087  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:28.941101  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:28.941113  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:28.941123  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:28.945051  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:28.945076  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:28.945086  706399 round_trippers.go:580]     Audit-Id: 6c622874-25a6-4b96-9b2e-4f49b904ff51
	I1218 11:53:28.945094  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:28.945102  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:28.945110  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:28.945118  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:28.945126  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:28 GMT
	I1218 11:53:28.946529  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"777"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84584 chars]
	I1218 11:53:28.950707  706399 system_pods.go:59] 12 kube-system pods found
	I1218 11:53:28.950736  706399 system_pods.go:61] "coredns-5dd5756b68-nl8xc" [17cd3c37-30e8-4d98-81f5-44f58135adf3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1218 11:53:28.950745  706399 system_pods.go:61] "etcd-multinode-107476" [57bcfe21-f4da-4bcf-bb4e-385b695e1e0f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1218 11:53:28.950751  706399 system_pods.go:61] "kindnet-6wlkb" [1cf338b4-8a33-4e69-aa83-3cd29b041e08] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1218 11:53:28.950756  706399 system_pods.go:61] "kindnet-8hrhv" [ef739466-48d4-4fbd-8fa5-63a41e4c6833] Running
	I1218 11:53:28.950760  706399 system_pods.go:61] "kindnet-l9h8d" [0acf0fd4-5988-4545-828c-7cb6076a5b18] Running
	I1218 11:53:28.950766  706399 system_pods.go:61] "kube-apiserver-multinode-107476" [ed1a5fb5-539a-4a7d-9977-42e1392858fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1218 11:53:28.950775  706399 system_pods.go:61] "kube-controller-manager-multinode-107476" [9b1fc3f6-07ef-4577-9135-a1c4844e5555] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1218 11:53:28.950782  706399 system_pods.go:61] "kube-proxy-9xwh7" [d1b02596-ab29-4f7a-8118-bd091eef9e44] Running
	I1218 11:53:28.950792  706399 system_pods.go:61] "kube-proxy-ff4bs" [a5e9af15-7c15-4de8-8be0-1b8e7289125f] Running
	I1218 11:53:28.950800  706399 system_pods.go:61] "kube-proxy-jf8kx" [060b1020-573b-4b35-9a0b-e04f37535267] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1218 11:53:28.950809  706399 system_pods.go:61] "kube-scheduler-multinode-107476" [08f65d94-d942-4ae5-a937-e3efff4b51dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1218 11:53:28.950824  706399 system_pods.go:61] "storage-provisioner" [e04ec19d-39a8-4849-b604-8e46b7f9cea3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1218 11:53:28.950832  706399 system_pods.go:74] duration metric: took 9.862056ms to wait for pod list to return data ...
	I1218 11:53:28.950839  706399 node_conditions.go:102] verifying NodePressure condition ...
	I1218 11:53:28.950909  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes
	I1218 11:53:28.950918  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:28.950925  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:28.950931  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:28.953444  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:28.953475  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:28.953487  706399 round_trippers.go:580]     Audit-Id: 0d66de6b-1b8d-4012-9156-1fa20bb81935
	I1218 11:53:28.953495  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:28.953501  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:28.953508  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:28.953513  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:28.953519  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:28 GMT
	I1218 11:53:28.953797  706399 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"777"},"items":[{"metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14775 chars]
	I1218 11:53:28.954628  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:28.954655  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:28.954667  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:28.954671  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:28.954677  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:28.954684  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:28.954690  706399 node_conditions.go:105] duration metric: took 3.843221ms to run NodePressure ...
	I1218 11:53:28.954714  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:29.198463  706399 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1218 11:53:29.198489  706399 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1218 11:53:29.198613  706399 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1218 11:53:29.198764  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1218 11:53:29.198778  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.198790  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.198807  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.202177  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:29.202201  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.202208  706399 round_trippers.go:580]     Audit-Id: 19d0d8d5-e9c5-4d32-b655-9ad8a4c44da9
	I1218 11:53:29.202213  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.202218  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.202223  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.202228  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.202233  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.203368  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"779"},"items":[{"metadata":{"name":"etcd-multinode-107476","namespace":"kube-system","uid":"57bcfe21-f4da-4bcf-bb4e-385b695e1e0f","resourceVersion":"767","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.124:2379","kubernetes.io/config.hash":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.mirror":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.seen":"2023-12-18T11:49:16.607301032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29788 chars]
	I1218 11:53:29.204464  706399 kubeadm.go:787] kubelet initialised
	I1218 11:53:29.204488  706399 kubeadm.go:788] duration metric: took 5.842944ms waiting for restarted kubelet to initialise ...
	I1218 11:53:29.204498  706399 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 11:53:29.204573  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:29.204584  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.204595  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.204613  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.208130  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:29.208151  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.208159  706399 round_trippers.go:580]     Audit-Id: 450b4722-b778-4d0a-aede-ee77ca9c229c
	I1218 11:53:29.208165  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.208171  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.208176  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.208181  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.208208  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.209329  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"779"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84584 chars]
	I1218 11:53:29.211875  706399 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.211970  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:29.211980  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.211991  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.212001  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.214577  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.214596  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.214603  706399 round_trippers.go:580]     Audit-Id: 9385acf6-1b01-4b3d-928c-439fe28d4f97
	I1218 11:53:29.214608  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.214613  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.214618  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.214623  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.214627  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.215229  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:29.215743  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:29.215765  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.215776  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.215783  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.217921  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.217938  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.217944  706399 round_trippers.go:580]     Audit-Id: 8a8697ed-9283-4bdf-9239-28520f9f9b9f
	I1218 11:53:29.217950  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.217958  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.217968  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.217977  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.217988  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.218120  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:29.218457  706399 pod_ready.go:97] node "multinode-107476" hosting pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.218478  706399 pod_ready.go:81] duration metric: took 6.581675ms waiting for pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:29.218492  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.218502  706399 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.218551  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107476
	I1218 11:53:29.218558  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.218572  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.218585  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.220388  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:29.220404  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.220410  706399 round_trippers.go:580]     Audit-Id: 6774ec4b-7426-4031-ac00-5f3c00310f09
	I1218 11:53:29.220415  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.220420  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.220426  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.220433  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.220442  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.220551  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107476","namespace":"kube-system","uid":"57bcfe21-f4da-4bcf-bb4e-385b695e1e0f","resourceVersion":"767","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.124:2379","kubernetes.io/config.hash":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.mirror":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.seen":"2023-12-18T11:49:16.607301032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6305 chars]
	I1218 11:53:29.220938  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:29.220954  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.220961  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.220967  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.222861  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:29.222877  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.222886  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.222897  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.222905  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.222913  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.222925  706399 round_trippers.go:580]     Audit-Id: cdc058a1-0407-4522-ad4e-1bccaa86b8e0
	I1218 11:53:29.222934  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.223090  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:29.223369  706399 pod_ready.go:97] node "multinode-107476" hosting pod "etcd-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.223386  706399 pod_ready.go:81] duration metric: took 4.874816ms waiting for pod "etcd-multinode-107476" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:29.223394  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "etcd-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.223412  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.223472  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-107476
	I1218 11:53:29.223479  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.223486  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.223496  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.225396  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:29.225413  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.225419  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.225425  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.225430  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.225435  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.225442  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.225451  706399 round_trippers.go:580]     Audit-Id: 2464f96a-0515-46f9-8313-633c8eafb3b2
	I1218 11:53:29.225634  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-107476","namespace":"kube-system","uid":"ed1a5fb5-539a-4a7d-9977-42e1392858fb","resourceVersion":"768","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.124:8443","kubernetes.io/config.hash":"d249aa06177557dc7c27cc4c9fd3f8c4","kubernetes.io/config.mirror":"d249aa06177557dc7c27cc4c9fd3f8c4","kubernetes.io/config.seen":"2023-12-18T11:49:16.607305528Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7859 chars]
	I1218 11:53:29.225978  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:29.225994  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.226001  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.226006  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.227849  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:29.227867  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.227876  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.227884  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.227892  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.227900  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.227909  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.227916  706399 round_trippers.go:580]     Audit-Id: 8723f00d-f528-46cc-b34b-878c1dbe29bf
	I1218 11:53:29.228105  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:29.228354  706399 pod_ready.go:97] node "multinode-107476" hosting pod "kube-apiserver-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.228374  706399 pod_ready.go:81] duration metric: took 4.951319ms waiting for pod "kube-apiserver-multinode-107476" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:29.228382  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "kube-apiserver-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.228387  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.228468  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-107476
	I1218 11:53:29.228478  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.228484  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.228490  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.234141  706399 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 11:53:29.234160  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.234169  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.234176  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.234190  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.234195  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.234201  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.234205  706399 round_trippers.go:580]     Audit-Id: e7a7e09c-4d05-4a64-917b-5e55b2c17b60
	I1218 11:53:29.234474  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-107476","namespace":"kube-system","uid":"9b1fc3f6-07ef-4577-9135-a1c4844e5555","resourceVersion":"769","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"00c351f167ca4a8342aa8125cafbf1ad","kubernetes.io/config.mirror":"00c351f167ca4a8342aa8125cafbf1ad","kubernetes.io/config.seen":"2023-12-18T11:49:16.607306981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7440 chars]
	I1218 11:53:29.342153  706399 request.go:629] Waited for 107.293593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:29.342245  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:29.342251  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.342259  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.342265  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.345014  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.345032  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.345039  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.345044  706399 round_trippers.go:580]     Audit-Id: 931285de-8f53-4e79-b792-460f413e4aff
	I1218 11:53:29.345049  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.345054  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.345059  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.345068  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.345238  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:29.345553  706399 pod_ready.go:97] node "multinode-107476" hosting pod "kube-controller-manager-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.345573  706399 pod_ready.go:81] duration metric: took 117.178912ms waiting for pod "kube-controller-manager-multinode-107476" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:29.345582  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "kube-controller-manager-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.345593  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9xwh7" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.542039  706399 request.go:629] Waited for 196.361004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xwh7
	I1218 11:53:29.542142  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xwh7
	I1218 11:53:29.542147  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.542156  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.542162  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.544982  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.545002  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.545009  706399 round_trippers.go:580]     Audit-Id: e1d63858-4541-4ccd-a4da-08fd054a97e6
	I1218 11:53:29.545017  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.545025  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.545033  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.545042  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.545058  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.545244  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9xwh7","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1b02596-ab29-4f7a-8118-bd091eef9e44","resourceVersion":"520","creationTimestamp":"2023-12-18T11:50:18Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:50:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I1218 11:53:29.741997  706399 request.go:629] Waited for 196.344122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m02
	I1218 11:53:29.742076  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m02
	I1218 11:53:29.742082  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.742093  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.742117  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.744705  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.744733  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.744743  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.744751  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.744759  706399 round_trippers.go:580]     Audit-Id: eb4d544e-890a-4cf6-8b49-17e1c66fedd1
	I1218 11:53:29.744766  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.744775  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.744785  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.744985  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476-m02","uid":"aac92642-4fcf-4fbe-89f6-b1c274d602fe","resourceVersion":"737","creationTimestamp":"2023-12-18T11:50:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T11_52_06_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:50:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3819 chars]
	I1218 11:53:29.745330  706399 pod_ready.go:92] pod "kube-proxy-9xwh7" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:29.745356  706399 pod_ready.go:81] duration metric: took 399.751355ms waiting for pod "kube-proxy-9xwh7" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.745369  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ff4bs" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.941544  706399 request.go:629] Waited for 196.09241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff4bs
	I1218 11:53:29.941631  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff4bs
	I1218 11:53:29.941639  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.941653  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.941664  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.944619  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.944641  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.944649  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.944654  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.944659  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.944665  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.944670  706399 round_trippers.go:580]     Audit-Id: 53295520-6dfc-40b0-aa42-f14c320fd991
	I1218 11:53:29.944675  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.945395  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ff4bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"a5e9af15-7c15-4de8-8be0-1b8e7289125f","resourceVersion":"746","creationTimestamp":"2023-12-18T11:51:17Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:51:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I1218 11:53:30.141176  706399 request.go:629] Waited for 195.305381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m03
	I1218 11:53:30.141251  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m03
	I1218 11:53:30.141277  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.141288  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.141294  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.144266  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:30.144293  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.144304  706399 round_trippers.go:580]     Audit-Id: db750d14-63e9-423b-9181-601ba7e56368
	I1218 11:53:30.144313  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.144321  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.144328  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.144335  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.144342  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.144508  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476-m03","uid":"18274b06-f1b8-4878-9e6b-e3745fba73a7","resourceVersion":"759","creationTimestamp":"2023-12-18T11:52:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T11_52_06_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:52:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3635 chars]
	I1218 11:53:30.144910  706399 pod_ready.go:92] pod "kube-proxy-ff4bs" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:30.144938  706399 pod_ready.go:81] duration metric: took 399.556805ms waiting for pod "kube-proxy-ff4bs" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:30.144951  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jf8kx" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:30.341891  706399 request.go:629] Waited for 196.832639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jf8kx
	I1218 11:53:30.341974  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jf8kx
	I1218 11:53:30.341981  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.341989  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.341996  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.344936  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:30.344960  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.344969  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.344976  706399 round_trippers.go:580]     Audit-Id: 6d7e686b-0932-465f-b25e-09aeb30d81ad
	I1218 11:53:30.344983  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.344990  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.344998  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.345005  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.345247  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jf8kx","generateName":"kube-proxy-","namespace":"kube-system","uid":"060b1020-573b-4b35-9a0b-e04f37535267","resourceVersion":"772","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5932 chars]
	I1218 11:53:30.542107  706399 request.go:629] Waited for 196.385627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:30.542172  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:30.542176  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.542202  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.542210  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.545091  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:30.545113  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.545121  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.545130  706399 round_trippers.go:580]     Audit-Id: c21950d0-952e-42f1-995c-f068b90f04c0
	I1218 11:53:30.545138  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.545145  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.545153  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.545164  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.545578  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:30.545899  706399 pod_ready.go:97] node "multinode-107476" hosting pod "kube-proxy-jf8kx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:30.545916  706399 pod_ready.go:81] duration metric: took 400.958711ms waiting for pod "kube-proxy-jf8kx" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:30.545925  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "kube-proxy-jf8kx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:30.545935  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:30.741971  706399 request.go:629] Waited for 195.944564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107476
	I1218 11:53:30.742047  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107476
	I1218 11:53:30.742052  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.742062  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.742069  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.745047  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:30.745075  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.745084  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.745092  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.745105  706399 round_trippers.go:580]     Audit-Id: 588c2353-9d7d-488b-a950-87bf03ba3da0
	I1218 11:53:30.745115  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.745122  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.745130  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.745381  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-107476","namespace":"kube-system","uid":"08f65d94-d942-4ae5-a937-e3efff4b51dd","resourceVersion":"770","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"47de9e5e3d9b879716556f063f68cd22","kubernetes.io/config.mirror":"47de9e5e3d9b879716556f063f68cd22","kubernetes.io/config.seen":"2023-12-18T11:49:16.607308314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5152 chars]
	I1218 11:53:30.941089  706399 request.go:629] Waited for 195.312312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:30.941185  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:30.941199  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.941210  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.941216  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.944408  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:30.944434  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.944445  706399 round_trippers.go:580]     Audit-Id: 7a4702e0-308a-4d75-b115-eb14716b6830
	I1218 11:53:30.944453  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.944462  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.944474  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.944486  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.944497  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.944675  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:30.945060  706399 pod_ready.go:97] node "multinode-107476" hosting pod "kube-scheduler-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:30.945088  706399 pod_ready.go:81] duration metric: took 399.145466ms waiting for pod "kube-scheduler-multinode-107476" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:30.945102  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "kube-scheduler-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:30.945113  706399 pod_ready.go:38] duration metric: took 1.740603836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 11:53:30.945134  706399 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 11:53:30.975551  706399 command_runner.go:130] > -16
	I1218 11:53:30.975760  706399 ops.go:34] apiserver oom_adj: -16
	I1218 11:53:30.975788  706399 kubeadm.go:640] restartCluster took 21.903211868s
	I1218 11:53:30.975799  706399 kubeadm.go:406] StartCluster complete in 21.931036061s
	I1218 11:53:30.975823  706399 settings.go:142] acquiring lock: {Name:mk1b55e0e8c256c6bc60d3bea159645d01ed78f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:53:30.975910  706399 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:53:30.976662  706399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17824-683489/kubeconfig: {Name:mkbe3b47b918311ed7d778fc321c77660f5f2482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:53:30.976915  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 11:53:30.976953  706399 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1218 11:53:30.980045  706399 out.go:177] * Enabled addons: 
	I1218 11:53:30.977197  706399 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:53:30.977270  706399 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:53:30.981684  706399 addons.go:502] enable addons completed in 4.7055ms: enabled=[]
	I1218 11:53:30.982005  706399 kapi.go:59] client config for multinode-107476: &rest.Config{Host:"https://192.168.39.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/client.crt", KeyFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/client.key", CAFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1ed00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 11:53:30.982452  706399 round_trippers.go:463] GET https://192.168.39.124:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1218 11:53:30.982466  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.982478  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.982487  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.985560  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:30.985590  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.985598  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.985604  706399 round_trippers.go:580]     Audit-Id: 733c0867-ba1a-4681-b566-8abcfe50d689
	I1218 11:53:30.985613  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.985627  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.985638  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.985644  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.985652  706399 round_trippers.go:580]     Content-Length: 291
	I1218 11:53:30.985680  706399 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3f9d4717-a78b-4c7e-9f95-6ab3b5581a7f","resourceVersion":"778","creationTimestamp":"2023-12-18T11:49:16Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1218 11:53:30.985863  706399 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-107476" context rescaled to 1 replicas
	I1218 11:53:30.985895  706399 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1218 11:53:30.987695  706399 out.go:177] * Verifying Kubernetes components...
	I1218 11:53:30.989853  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 11:53:31.158210  706399 command_runner.go:130] > apiVersion: v1
	I1218 11:53:31.158232  706399 command_runner.go:130] > data:
	I1218 11:53:31.158237  706399 command_runner.go:130] >   Corefile: |
	I1218 11:53:31.158243  706399 command_runner.go:130] >     .:53 {
	I1218 11:53:31.158250  706399 command_runner.go:130] >         log
	I1218 11:53:31.158263  706399 command_runner.go:130] >         errors
	I1218 11:53:31.158271  706399 command_runner.go:130] >         health {
	I1218 11:53:31.158287  706399 command_runner.go:130] >            lameduck 5s
	I1218 11:53:31.158292  706399 command_runner.go:130] >         }
	I1218 11:53:31.158300  706399 command_runner.go:130] >         ready
	I1218 11:53:31.158309  706399 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1218 11:53:31.158313  706399 command_runner.go:130] >            pods insecure
	I1218 11:53:31.158325  706399 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1218 11:53:31.158335  706399 command_runner.go:130] >            ttl 30
	I1218 11:53:31.158342  706399 command_runner.go:130] >         }
	I1218 11:53:31.158352  706399 command_runner.go:130] >         prometheus :9153
	I1218 11:53:31.158360  706399 command_runner.go:130] >         hosts {
	I1218 11:53:31.158374  706399 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1218 11:53:31.158384  706399 command_runner.go:130] >            fallthrough
	I1218 11:53:31.158390  706399 command_runner.go:130] >         }
	I1218 11:53:31.158397  706399 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1218 11:53:31.158404  706399 command_runner.go:130] >            max_concurrent 1000
	I1218 11:53:31.158411  706399 command_runner.go:130] >         }
	I1218 11:53:31.158418  706399 command_runner.go:130] >         cache 30
	I1218 11:53:31.158434  706399 command_runner.go:130] >         loop
	I1218 11:53:31.158444  706399 command_runner.go:130] >         reload
	I1218 11:53:31.158453  706399 command_runner.go:130] >         loadbalance
	I1218 11:53:31.158462  706399 command_runner.go:130] >     }
	I1218 11:53:31.158472  706399 command_runner.go:130] > kind: ConfigMap
	I1218 11:53:31.158481  706399 command_runner.go:130] > metadata:
	I1218 11:53:31.158488  706399 command_runner.go:130] >   creationTimestamp: "2023-12-18T11:49:16Z"
	I1218 11:53:31.158492  706399 command_runner.go:130] >   name: coredns
	I1218 11:53:31.158498  706399 command_runner.go:130] >   namespace: kube-system
	I1218 11:53:31.158509  706399 command_runner.go:130] >   resourceVersion: "396"
	I1218 11:53:31.158517  706399 command_runner.go:130] >   uid: 9e09d417-7d67-4099-aeea-880a5f122cec
	I1218 11:53:31.161286  706399 node_ready.go:35] waiting up to 6m0s for node "multinode-107476" to be "Ready" ...
	I1218 11:53:31.161454  706399 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1218 11:53:31.161506  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:31.161526  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:31.161538  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:31.161551  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:31.164076  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:31.164092  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:31.164099  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:31.164104  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:31.164109  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:31 GMT
	I1218 11:53:31.164114  706399 round_trippers.go:580]     Audit-Id: a1e2309c-5203-41c1-bdff-38bf4aa1b0e4
	I1218 11:53:31.164119  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:31.164124  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:31.164299  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:31.661958  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:31.661994  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:31.662005  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:31.662014  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:31.665299  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:31.665326  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:31.665337  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:31.665345  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:31 GMT
	I1218 11:53:31.665354  706399 round_trippers.go:580]     Audit-Id: 715c021b-232b-46db-b224-0ee0e1d87bd0
	I1218 11:53:31.665364  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:31.665372  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:31.665383  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:31.665557  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:32.162261  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:32.162294  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:32.162318  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:32.162328  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:32.165415  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:32.165445  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:32.165456  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:32.165465  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:32.165473  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:32 GMT
	I1218 11:53:32.165480  706399 round_trippers.go:580]     Audit-Id: 8178c82f-f5df-4946-829a-8d607bef70f1
	I1218 11:53:32.165487  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:32.165494  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:32.165662  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:32.662421  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:32.662459  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:32.662472  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:32.662482  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:32.665000  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:32.665024  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:32.665031  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:32.665036  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:32.665044  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:32.665050  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:32.665055  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:32 GMT
	I1218 11:53:32.665063  706399 round_trippers.go:580]     Audit-Id: acbd11c7-43ce-4b9c-970b-6cfe7595d19b
	I1218 11:53:32.665272  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:33.161915  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:33.161951  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:33.161964  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:33.161973  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:33.164679  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:33.164707  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:33.164718  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:33.164727  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:33.164734  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:33 GMT
	I1218 11:53:33.164742  706399 round_trippers.go:580]     Audit-Id: 803dab92-ad10-4d1e-9c2c-02e13845c977
	I1218 11:53:33.164754  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:33.164761  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:33.164950  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:33.165364  706399 node_ready.go:58] node "multinode-107476" has status "Ready":"False"
	I1218 11:53:33.661704  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:33.661729  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:33.661737  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:33.661743  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:33.664502  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:33.664528  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:33.664537  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:33 GMT
	I1218 11:53:33.664542  706399 round_trippers.go:580]     Audit-Id: 78f37d85-d255-498a-97ae-7e7ffea71734
	I1218 11:53:33.664547  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:33.664552  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:33.664558  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:33.664563  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:33.664871  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:33.665288  706399 node_ready.go:49] node "multinode-107476" has status "Ready":"True"
	I1218 11:53:33.665314  706399 node_ready.go:38] duration metric: took 2.503992718s waiting for node "multinode-107476" to be "Ready" ...
	I1218 11:53:33.665324  706399 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 11:53:33.665384  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:33.665393  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:33.665400  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:33.665406  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:33.668975  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:33.668992  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:33.668998  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:33 GMT
	I1218 11:53:33.669004  706399 round_trippers.go:580]     Audit-Id: 616ca18a-8e53-464b-b8f7-fdc3a26f56e2
	I1218 11:53:33.669011  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:33.669016  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:33.669021  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:33.669026  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:33.670356  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"852"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83732 chars]
	I1218 11:53:33.672899  706399 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:33.672977  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:33.672986  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:33.672993  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:33.672999  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:33.675712  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:33.675728  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:33.675743  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:33 GMT
	I1218 11:53:33.675748  706399 round_trippers.go:580]     Audit-Id: eabe770a-6bc4-4dfc-b039-991ddbcade34
	I1218 11:53:33.675755  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:33.675760  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:33.675765  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:33.675771  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:33.676383  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:33.676975  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:33.676993  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:33.677001  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:33.677007  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:33.678858  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:33.678876  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:33.678885  706399 round_trippers.go:580]     Audit-Id: f39381a7-3505-48c5-8706-62a66b7c6d74
	I1218 11:53:33.678898  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:33.678907  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:33.678913  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:33.678918  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:33.678926  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:33 GMT
	I1218 11:53:33.679219  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:34.173545  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:34.173574  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:34.173582  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:34.173588  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:34.177792  706399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 11:53:34.177814  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:34.177821  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:34.177827  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:34.177832  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:34.177837  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:34 GMT
	I1218 11:53:34.177842  706399 round_trippers.go:580]     Audit-Id: e0c08780-2ccb-4466-ac60-0130be0e91bb
	I1218 11:53:34.177847  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:34.178197  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:34.178858  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:34.178877  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:34.178888  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:34.178898  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:34.182714  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:34.182734  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:34.182741  706399 round_trippers.go:580]     Audit-Id: 8aad3fb3-c28c-4741-bb51-1b599fc4d9a2
	I1218 11:53:34.182746  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:34.182751  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:34.182756  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:34.182761  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:34.182766  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:34 GMT
	I1218 11:53:34.183249  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:34.674054  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:34.674087  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:34.674102  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:34.674111  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:34.677143  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:34.677168  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:34.677175  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:34.677181  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:34.677191  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:34.677196  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:34 GMT
	I1218 11:53:34.677201  706399 round_trippers.go:580]     Audit-Id: 705cd8df-0cf3-47cc-9898-d4f3cbf27fc1
	I1218 11:53:34.677206  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:34.677480  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:34.677955  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:34.677969  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:34.677977  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:34.677983  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:34.680928  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:34.680951  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:34.680961  706399 round_trippers.go:580]     Audit-Id: 79e80102-7689-456c-968e-8b545873dcf0
	I1218 11:53:34.680969  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:34.680979  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:34.680992  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:34.681003  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:34.681011  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:34 GMT
	I1218 11:53:34.681532  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:35.173215  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:35.173248  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:35.173257  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:35.173309  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:35.176153  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:35.176175  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:35.176183  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:35.176190  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:35.176199  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:35 GMT
	I1218 11:53:35.176207  706399 round_trippers.go:580]     Audit-Id: 6b84f91f-f0e3-431d-b790-7a72f221660b
	I1218 11:53:35.176218  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:35.176227  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:35.176689  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:35.177270  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:35.177287  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:35.177295  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:35.177303  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:35.179670  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:35.179698  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:35.179705  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:35.179712  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:35.179720  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:35.179728  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:35 GMT
	I1218 11:53:35.179735  706399 round_trippers.go:580]     Audit-Id: 4ebee61b-cc3b-47df-a387-697134152b33
	I1218 11:53:35.179744  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:35.179923  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:35.673560  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:35.673590  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:35.673599  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:35.673605  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:35.676855  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:35.676885  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:35.676895  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:35.676903  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:35.676910  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:35 GMT
	I1218 11:53:35.676917  706399 round_trippers.go:580]     Audit-Id: 29332715-2ca7-46d2-9eae-60bcc11a611d
	I1218 11:53:35.676923  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:35.676931  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:35.677062  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:35.677571  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:35.677588  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:35.677599  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:35.677610  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:35.680478  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:35.680509  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:35.680519  706399 round_trippers.go:580]     Audit-Id: cf7fec49-5746-4ad8-ad95-44ddd5a46a7c
	I1218 11:53:35.680528  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:35.680537  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:35.680545  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:35.680552  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:35.680560  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:35 GMT
	I1218 11:53:35.680765  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:35.681145  706399 pod_ready.go:102] pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace has status "Ready":"False"
	I1218 11:53:36.173403  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:36.173429  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:36.173440  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:36.173448  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:36.179050  706399 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 11:53:36.179081  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:36.179092  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:36.179127  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:36.179141  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:36.179149  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:36 GMT
	I1218 11:53:36.179161  706399 round_trippers.go:580]     Audit-Id: 2262febf-a9c1-4185-a064-37f0e57229fd
	I1218 11:53:36.179173  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:36.179914  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:36.180600  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:36.180626  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:36.180638  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:36.180648  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:36.182832  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:36.182851  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:36.182859  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:36.182867  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:36.182874  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:36.182881  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:36 GMT
	I1218 11:53:36.182890  706399 round_trippers.go:580]     Audit-Id: 7672883b-ce34-4c88-940d-e431e9489d5d
	I1218 11:53:36.182900  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:36.183021  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:36.673765  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:36.673797  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:36.673809  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:36.673816  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:36.676897  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:36.676920  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:36.676941  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:36.676948  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:36.676956  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:36.676963  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:36.676971  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:36 GMT
	I1218 11:53:36.676985  706399 round_trippers.go:580]     Audit-Id: 4ce118c9-c9cf-42f4-ad28-24e77a8f8d0b
	I1218 11:53:36.677587  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:36.678050  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:36.678064  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:36.678073  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:36.678079  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:36.680488  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:36.680504  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:36.680513  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:36.680520  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:36.680528  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:36.680542  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:36.680558  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:36 GMT
	I1218 11:53:36.680567  706399 round_trippers.go:580]     Audit-Id: 329a463d-e9eb-4a48-941f-81cfd668cb20
	I1218 11:53:36.680745  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:37.173387  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:37.173415  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:37.173423  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:37.173430  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:37.176760  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:37.176789  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:37.176799  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:37.176807  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:37.176814  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:37 GMT
	I1218 11:53:37.176822  706399 round_trippers.go:580]     Audit-Id: ccd32a2b-22a0-4e80-891a-798ae2e74751
	I1218 11:53:37.176830  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:37.176841  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:37.177566  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:37.178053  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:37.178066  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:37.178074  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:37.178080  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:37.180584  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:37.180606  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:37.180616  706399 round_trippers.go:580]     Audit-Id: 6b24dd36-f6bb-4b1e-bc13-bfdc9fcb3deb
	I1218 11:53:37.180624  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:37.180634  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:37.180644  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:37.180660  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:37.180673  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:37 GMT
	I1218 11:53:37.181042  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:37.673822  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:37.673855  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:37.673864  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:37.673870  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:37.676905  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:37.676930  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:37.676937  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:37.676943  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:37.676948  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:37.676953  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:37 GMT
	I1218 11:53:37.676958  706399 round_trippers.go:580]     Audit-Id: c843db6c-febe-472d-9c6d-2c60ae326f9c
	I1218 11:53:37.676963  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:37.677455  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:37.677995  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:37.678010  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:37.678018  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:37.678024  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:37.680442  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:37.680462  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:37.680471  706399 round_trippers.go:580]     Audit-Id: 03d35d8e-3248-4e4a-aaa4-561ea5506445
	I1218 11:53:37.680479  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:37.680486  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:37.680494  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:37.680506  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:37.680514  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:37 GMT
	I1218 11:53:37.680764  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:38.173464  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:38.173495  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:38.173504  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:38.173510  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:38.177182  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:38.177207  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:38.177217  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:38.177225  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:38 GMT
	I1218 11:53:38.177231  706399 round_trippers.go:580]     Audit-Id: 8f5f4bf7-c666-4e33-9c29-fb899337e95e
	I1218 11:53:38.177238  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:38.177245  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:38.177252  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:38.177919  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:38.178418  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:38.178436  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:38.178444  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:38.178449  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:38.181432  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:38.181453  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:38.181463  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:38 GMT
	I1218 11:53:38.181472  706399 round_trippers.go:580]     Audit-Id: cdef1a0d-7934-417d-b867-e54c5da5c288
	I1218 11:53:38.181480  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:38.181488  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:38.181497  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:38.181506  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:38.182567  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:38.182937  706399 pod_ready.go:102] pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace has status "Ready":"False"
	I1218 11:53:38.673981  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:38.674003  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:38.674014  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:38.674021  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:38.676858  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:38.676938  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:38.676957  706399 round_trippers.go:580]     Audit-Id: 2ef42698-8375-41e0-83e7-e39f4386e551
	I1218 11:53:38.676967  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:38.676976  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:38.676982  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:38.676987  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:38.676995  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:38 GMT
	I1218 11:53:38.677194  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:38.677739  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:38.677756  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:38.677766  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:38.677775  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:38.680079  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:38.680104  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:38.680114  706399 round_trippers.go:580]     Audit-Id: cb04d605-5990-411c-bb61-d27a16eb40e0
	I1218 11:53:38.680122  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:38.680127  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:38.680132  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:38.680137  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:38.680142  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:38 GMT
	I1218 11:53:38.680303  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:39.173689  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:39.173724  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:39.173735  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:39.173743  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:39.176928  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:39.176956  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:39.176966  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:39.176974  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:39.176991  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:39.176998  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:39 GMT
	I1218 11:53:39.177009  706399 round_trippers.go:580]     Audit-Id: 53d7f113-e0ab-4396-97c8-fac771a70baa
	I1218 11:53:39.177017  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:39.177158  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:39.177635  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:39.177666  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:39.177677  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:39.177687  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:39.180115  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:39.180141  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:39.180152  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:39.180160  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:39.180166  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:39.180174  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:39 GMT
	I1218 11:53:39.180179  706399 round_trippers.go:580]     Audit-Id: 90d34db6-74ca-42f5-81d9-8222532758aa
	I1218 11:53:39.180196  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:39.180432  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:39.674135  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:39.674165  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:39.674176  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:39.674185  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:39.676939  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:39.676965  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:39.676974  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:39.676990  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:39.676995  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:39 GMT
	I1218 11:53:39.677000  706399 round_trippers.go:580]     Audit-Id: 501a9bb0-a4f9-46a1-b970-b27f1660227c
	I1218 11:53:39.677005  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:39.677011  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:39.677211  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:39.677746  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:39.677765  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:39.677776  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:39.677784  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:39.680008  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:39.680025  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:39.680032  706399 round_trippers.go:580]     Audit-Id: 649faefe-95f3-4ba5-944c-2b3ac4a04840
	I1218 11:53:39.680037  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:39.680042  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:39.680047  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:39.680059  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:39.680064  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:39 GMT
	I1218 11:53:39.680517  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:40.173280  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:40.173318  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:40.173330  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:40.173338  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:40.176226  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:40.176252  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:40.176260  706399 round_trippers.go:580]     Audit-Id: dcaceef1-cb4c-409d-9795-82135569a3f0
	I1218 11:53:40.176265  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:40.176271  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:40.176276  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:40.176281  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:40.176286  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:40 GMT
	I1218 11:53:40.176500  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:40.177135  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:40.177154  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:40.177166  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:40.177173  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:40.179445  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:40.179459  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:40.179466  706399 round_trippers.go:580]     Audit-Id: 9033ba3c-1dd2-4b09-8d85-34017bc0e26d
	I1218 11:53:40.179471  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:40.179476  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:40.179480  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:40.179486  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:40.179491  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:40 GMT
	I1218 11:53:40.179900  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:40.673585  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:40.673616  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:40.673624  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:40.673630  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:40.676460  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:40.676486  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:40.676496  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:40.676505  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:40 GMT
	I1218 11:53:40.676513  706399 round_trippers.go:580]     Audit-Id: 0ac8a2dd-4ed5-431e-9228-2726aad2faf3
	I1218 11:53:40.676522  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:40.676532  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:40.676542  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:40.676681  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:40.677282  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:40.677299  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:40.677309  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:40.677322  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:40.679390  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:40.679405  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:40.679411  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:40.679417  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:40.679422  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:40.679426  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:40.679431  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:40 GMT
	I1218 11:53:40.679437  706399 round_trippers.go:580]     Audit-Id: 3088a618-2697-41b5-b81f-673ab861df2d
	I1218 11:53:40.679674  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:40.680074  706399 pod_ready.go:102] pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace has status "Ready":"False"
	I1218 11:53:41.173403  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:41.173429  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:41.173438  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:41.173443  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:41.176266  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:41.176289  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:41.176300  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:41.176315  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:41.176322  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:41.176336  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:41 GMT
	I1218 11:53:41.176349  706399 round_trippers.go:580]     Audit-Id: 53e21024-a9d5-4eca-a522-b1244059f300
	I1218 11:53:41.176356  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:41.177028  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:41.177537  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:41.177553  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:41.177561  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:41.177570  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:41.179482  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:41.179501  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:41.179523  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:41.179532  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:41.179542  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:41.179552  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:41.179564  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:41 GMT
	I1218 11:53:41.179575  706399 round_trippers.go:580]     Audit-Id: 237d0bff-9402-489c-822c-431b43baeb0c
	I1218 11:53:41.179806  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:41.673434  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:41.673463  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:41.673475  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:41.673481  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:41.676679  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:41.676701  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:41.676709  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:41.676715  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:41 GMT
	I1218 11:53:41.676720  706399 round_trippers.go:580]     Audit-Id: 5e3f13c1-c640-47db-98ab-31b91f950abc
	I1218 11:53:41.676725  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:41.676731  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:41.676736  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:41.677002  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:41.677473  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:41.677493  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:41.677504  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:41.677512  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:41.679823  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:41.679840  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:41.679847  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:41.679852  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:41.679857  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:41.679862  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:41 GMT
	I1218 11:53:41.679867  706399 round_trippers.go:580]     Audit-Id: c58e98cd-5718-47f9-b671-de3e227e7f8a
	I1218 11:53:41.679880  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:41.680038  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:42.173754  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:42.173792  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:42.173801  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:42.173807  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:42.176269  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:42.176291  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:42.176307  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:42 GMT
	I1218 11:53:42.176315  706399 round_trippers.go:580]     Audit-Id: f068f51e-93e6-4b4b-8a24-382d1325b363
	I1218 11:53:42.176324  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:42.176333  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:42.176343  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:42.176352  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:42.176513  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:42.176990  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:42.177006  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:42.177016  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:42.177025  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:42.179154  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:42.179173  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:42.179184  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:42.179193  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:42.179200  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:42 GMT
	I1218 11:53:42.179208  706399 round_trippers.go:580]     Audit-Id: 6bd1c020-71a6-4a7c-b496-e507683b71a1
	I1218 11:53:42.179214  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:42.179219  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:42.179368  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:42.674178  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:42.674211  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:42.674219  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:42.674225  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:42.676989  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:42.677019  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:42.677030  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:42.677039  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:42.677048  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:42 GMT
	I1218 11:53:42.677057  706399 round_trippers.go:580]     Audit-Id: e7f3a1b6-10ed-4499-9e1f-e736dfc275de
	I1218 11:53:42.677069  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:42.677077  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:42.677226  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:42.677701  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:42.677715  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:42.677722  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:42.677728  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:42.679919  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:42.679944  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:42.679952  706399 round_trippers.go:580]     Audit-Id: b58dd22e-a294-44fd-a21e-73d9d8edf70c
	I1218 11:53:42.679958  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:42.679963  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:42.679968  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:42.679974  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:42.679979  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:42 GMT
	I1218 11:53:42.680228  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:42.680665  706399 pod_ready.go:102] pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace has status "Ready":"False"
	I1218 11:53:43.173955  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:43.173986  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:43.173994  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:43.174000  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:43.179521  706399 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 11:53:43.179550  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:43.179561  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:43 GMT
	I1218 11:53:43.179571  706399 round_trippers.go:580]     Audit-Id: 361dfd5f-b3d7-4aee-a744-f1e5be8299ab
	I1218 11:53:43.179579  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:43.179587  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:43.179597  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:43.179605  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:43.179840  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:43.180347  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:43.180364  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:43.180371  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:43.180377  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:43.182529  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:43.182552  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:43.182562  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:43 GMT
	I1218 11:53:43.182571  706399 round_trippers.go:580]     Audit-Id: c293cdc2-7c87-4e68-b2af-879cb905970f
	I1218 11:53:43.182578  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:43.182587  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:43.182594  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:43.182602  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:43.182772  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:43.673323  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:43.673355  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:43.673366  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:43.673375  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:43.676722  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:43.676752  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:43.676762  706399 round_trippers.go:580]     Audit-Id: 94473599-4289-4510-bb2d-43ba24b179f0
	I1218 11:53:43.676770  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:43.676778  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:43.676804  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:43.676819  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:43.676832  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:43 GMT
	I1218 11:53:43.677037  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:43.677593  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:43.677612  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:43.677624  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:43.677633  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:43.680695  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:43.680718  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:43.680727  706399 round_trippers.go:580]     Audit-Id: bdae868b-fd96-4f89-9ccb-5dce584f6e62
	I1218 11:53:43.680737  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:43.680745  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:43.680753  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:43.680770  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:43.680778  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:43 GMT
	I1218 11:53:43.681643  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.173868  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:44.173892  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.173900  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.173907  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.185903  706399 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1218 11:53:44.185939  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.185949  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.185957  706399 round_trippers.go:580]     Audit-Id: 0807c889-4f55-447d-909a-ec577df47c9f
	I1218 11:53:44.185964  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.185973  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.185981  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.185990  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.186217  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:44.186803  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:44.186821  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.186829  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.186835  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.189463  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.189484  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.189494  706399 round_trippers.go:580]     Audit-Id: d76f03a9-c756-48da-8594-aa7191476ce1
	I1218 11:53:44.189502  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.189510  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.189519  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.189527  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.189536  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.189666  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.673257  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:44.673294  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.673303  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.673309  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.678016  706399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 11:53:44.678037  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.678044  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.678061  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.678066  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.678071  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.678076  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.678082  706399 round_trippers.go:580]     Audit-Id: ded65a70-0ef7-468a-8c23-d3584306f5ce
	I1218 11:53:44.678372  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"887","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I1218 11:53:44.678912  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:44.678929  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.678936  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.678943  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.683034  706399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 11:53:44.683059  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.683068  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.683076  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.683085  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.683103  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.683116  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.683124  706399 round_trippers.go:580]     Audit-Id: dee3a488-a4c0-429c-a3d0-763057e3e6fa
	I1218 11:53:44.683810  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.684155  706399 pod_ready.go:92] pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:44.684175  706399 pod_ready.go:81] duration metric: took 11.01125188s waiting for pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.684185  706399 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.684251  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107476
	I1218 11:53:44.684260  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.684267  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.684273  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.686236  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:44.686257  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.686282  706399 round_trippers.go:580]     Audit-Id: 57a8ca26-0ed4-4f32-a864-04c5cde44f00
	I1218 11:53:44.686294  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.686304  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.686317  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.686324  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.686334  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.686465  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107476","namespace":"kube-system","uid":"57bcfe21-f4da-4bcf-bb4e-385b695e1e0f","resourceVersion":"860","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.124:2379","kubernetes.io/config.hash":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.mirror":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.seen":"2023-12-18T11:49:16.607301032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I1218 11:53:44.686943  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:44.686962  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.686969  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.686975  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.689166  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.689180  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.689186  706399 round_trippers.go:580]     Audit-Id: cf29250f-3957-4111-b39c-e51f822d2956
	I1218 11:53:44.689192  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.689196  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.689201  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.689206  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.689214  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.689316  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.689596  706399 pod_ready.go:92] pod "etcd-multinode-107476" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:44.689612  706399 pod_ready.go:81] duration metric: took 5.418084ms waiting for pod "etcd-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.689626  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.689687  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-107476
	I1218 11:53:44.689696  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.689702  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.689708  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.692944  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:44.692965  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.692974  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.692983  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.692991  706399 round_trippers.go:580]     Audit-Id: 6c7bd0e3-c0dc-4d2f-8958-13828542872b
	I1218 11:53:44.692999  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.693007  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.693017  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.693306  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-107476","namespace":"kube-system","uid":"ed1a5fb5-539a-4a7d-9977-42e1392858fb","resourceVersion":"856","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.124:8443","kubernetes.io/config.hash":"d249aa06177557dc7c27cc4c9fd3f8c4","kubernetes.io/config.mirror":"d249aa06177557dc7c27cc4c9fd3f8c4","kubernetes.io/config.seen":"2023-12-18T11:49:16.607305528Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7615 chars]
	I1218 11:53:44.693815  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:44.693830  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.693837  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.693842  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.696806  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.696825  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.696832  706399 round_trippers.go:580]     Audit-Id: 0bdd9f51-a776-465a-8a9e-1430d9ca51e2
	I1218 11:53:44.696837  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.696842  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.696846  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.696851  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.696856  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.697133  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.697438  706399 pod_ready.go:92] pod "kube-apiserver-multinode-107476" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:44.697454  706399 pod_ready.go:81] duration metric: took 7.821649ms waiting for pod "kube-apiserver-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.697463  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.697538  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-107476
	I1218 11:53:44.697551  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.697563  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.697579  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.700370  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.700389  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.700399  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.700408  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.700415  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.700424  706399 round_trippers.go:580]     Audit-Id: a8f89c02-db62-4dfd-aeec-c6d8bec7c55d
	I1218 11:53:44.700432  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.700440  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.702801  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-107476","namespace":"kube-system","uid":"9b1fc3f6-07ef-4577-9135-a1c4844e5555","resourceVersion":"851","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"00c351f167ca4a8342aa8125cafbf1ad","kubernetes.io/config.mirror":"00c351f167ca4a8342aa8125cafbf1ad","kubernetes.io/config.seen":"2023-12-18T11:49:16.607306981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7178 chars]
	I1218 11:53:44.703704  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:44.703722  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.703731  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.703740  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.706249  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.706267  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.706274  706399 round_trippers.go:580]     Audit-Id: 7ca2ece9-43e8-49c0-b944-aa148d24246d
	I1218 11:53:44.706279  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.706284  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.706289  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.706295  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.706308  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.706518  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.706859  706399 pod_ready.go:92] pod "kube-controller-manager-multinode-107476" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:44.706874  706399 pod_ready.go:81] duration metric: took 9.405069ms waiting for pod "kube-controller-manager-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.706885  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9xwh7" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.706943  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xwh7
	I1218 11:53:44.706954  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.706961  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.706969  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.709895  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.709910  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.709916  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.709921  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.709926  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.709931  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.709936  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.709941  706399 round_trippers.go:580]     Audit-Id: 73ccb16b-4b09-4e96-9ff3-b6875d4dcebf
	I1218 11:53:44.710221  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9xwh7","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1b02596-ab29-4f7a-8118-bd091eef9e44","resourceVersion":"520","creationTimestamp":"2023-12-18T11:50:18Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:50:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I1218 11:53:44.710653  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m02
	I1218 11:53:44.710668  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.710679  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.710689  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.713326  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.713340  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.713347  706399 round_trippers.go:580]     Audit-Id: 51b3f6a6-746d-4c41-89de-3e3d10f2ac93
	I1218 11:53:44.713367  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.713375  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.713380  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.713385  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.713396  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.713985  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476-m02","uid":"aac92642-4fcf-4fbe-89f6-b1c274d602fe","resourceVersion":"737","creationTimestamp":"2023-12-18T11:50:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T11_52_06_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:50:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3819 chars]
	I1218 11:53:44.714201  706399 pod_ready.go:92] pod "kube-proxy-9xwh7" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:44.714214  706399 pod_ready.go:81] duration metric: took 7.323276ms waiting for pod "kube-proxy-9xwh7" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.714224  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ff4bs" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.873723  706399 request.go:629] Waited for 159.413698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff4bs
	I1218 11:53:44.873815  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff4bs
	I1218 11:53:44.873823  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.873835  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.873846  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.876813  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.876855  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.876866  706399 round_trippers.go:580]     Audit-Id: 80d37f73-516f-4df0-a715-29b05d26f212
	I1218 11:53:44.876872  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.876878  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.876883  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.876888  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.876895  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.877037  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ff4bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"a5e9af15-7c15-4de8-8be0-1b8e7289125f","resourceVersion":"746","creationTimestamp":"2023-12-18T11:51:17Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:51:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I1218 11:53:45.074061  706399 request.go:629] Waited for 196.407368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m03
	I1218 11:53:45.074135  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m03
	I1218 11:53:45.074141  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.074148  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.074154  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.076973  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:45.077001  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.077013  706399 round_trippers.go:580]     Audit-Id: 0150f682-9003-42e6-95c5-4a92f0ba4920
	I1218 11:53:45.077022  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.077031  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.077040  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.077046  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.077051  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.077151  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476-m03","uid":"18274b06-f1b8-4878-9e6b-e3745fba73a7","resourceVersion":"759","creationTimestamp":"2023-12-18T11:52:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T11_52_06_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:52:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3635 chars]
	I1218 11:53:45.077554  706399 pod_ready.go:92] pod "kube-proxy-ff4bs" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:45.077579  706399 pod_ready.go:81] duration metric: took 363.348514ms waiting for pod "kube-proxy-ff4bs" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:45.077591  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jf8kx" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:45.273746  706399 request.go:629] Waited for 196.06681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jf8kx
	I1218 11:53:45.273821  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jf8kx
	I1218 11:53:45.273827  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.273835  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.273842  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.276787  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:45.276809  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.276816  706399 round_trippers.go:580]     Audit-Id: a6700efe-44c9-4e0b-ab8b-4cceb94a69cc
	I1218 11:53:45.276825  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.276834  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.276842  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.276850  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.276859  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.277036  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jf8kx","generateName":"kube-proxy-","namespace":"kube-system","uid":"060b1020-573b-4b35-9a0b-e04f37535267","resourceVersion":"782","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
	I1218 11:53:45.474033  706399 request.go:629] Waited for 196.438047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:45.474131  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:45.474142  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.474156  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.474169  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.477824  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:45.477853  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.477864  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.477873  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.477880  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.477889  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.477897  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.477909  706399 round_trippers.go:580]     Audit-Id: c36a7602-f8f6-447c-85d1-76254cd38665
	I1218 11:53:45.478069  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:45.478494  706399 pod_ready.go:92] pod "kube-proxy-jf8kx" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:45.478515  706399 pod_ready.go:81] duration metric: took 400.917905ms waiting for pod "kube-proxy-jf8kx" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:45.478525  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:45.673370  706399 request.go:629] Waited for 194.759725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107476
	I1218 11:53:45.673457  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107476
	I1218 11:53:45.673463  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.673471  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.673480  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.677105  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:45.677128  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.677137  706399 round_trippers.go:580]     Audit-Id: 5a30c9ba-0617-498f-83e0-396ac7b0a17b
	I1218 11:53:45.677145  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.677153  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.677160  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.677167  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.677180  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.677824  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-107476","namespace":"kube-system","uid":"08f65d94-d942-4ae5-a937-e3efff4b51dd","resourceVersion":"862","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"47de9e5e3d9b879716556f063f68cd22","kubernetes.io/config.mirror":"47de9e5e3d9b879716556f063f68cd22","kubernetes.io/config.seen":"2023-12-18T11:49:16.607308314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4908 chars]
	I1218 11:53:45.873712  706399 request.go:629] Waited for 195.397858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:45.873812  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:45.873823  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.873831  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.873837  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.876889  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:45.876911  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.876918  706399 round_trippers.go:580]     Audit-Id: bfe725d9-9c70-4dc7-bd45-d55e484f467a
	I1218 11:53:45.876924  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.876928  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.876933  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.876938  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.876943  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.877172  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:45.877490  706399 pod_ready.go:92] pod "kube-scheduler-multinode-107476" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:45.877504  706399 pod_ready.go:81] duration metric: took 398.969668ms waiting for pod "kube-scheduler-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:45.877517  706399 pod_ready.go:38] duration metric: took 12.212180593s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 11:53:45.877535  706399 api_server.go:52] waiting for apiserver process to appear ...
	I1218 11:53:45.877585  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:45.893465  706399 command_runner.go:130] > 1729
	I1218 11:53:45.893561  706399 api_server.go:72] duration metric: took 14.907630232s to wait for apiserver process to appear ...
	I1218 11:53:45.893577  706399 api_server.go:88] waiting for apiserver healthz status ...
	I1218 11:53:45.893601  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:45.899790  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 200:
	ok
	I1218 11:53:45.899867  706399 round_trippers.go:463] GET https://192.168.39.124:8443/version
	I1218 11:53:45.899873  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.899881  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.899887  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.901094  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:45.901120  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.901128  706399 round_trippers.go:580]     Audit-Id: 7b85e82b-ec64-4584-8946-326f560ec5fc
	I1218 11:53:45.901134  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.901139  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.901145  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.901150  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.901156  706399 round_trippers.go:580]     Content-Length: 264
	I1218 11:53:45.901164  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.901186  706399 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1218 11:53:45.901243  706399 api_server.go:141] control plane version: v1.28.4
	I1218 11:53:45.901259  706399 api_server.go:131] duration metric: took 7.675448ms to wait for apiserver health ...
	I1218 11:53:45.901267  706399 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 11:53:46.073761  706399 request.go:629] Waited for 172.377393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:46.073824  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:46.073837  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:46.073845  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:46.073851  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:46.078255  706399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 11:53:46.078283  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:46.078291  706399 round_trippers.go:580]     Audit-Id: 8a8a3f91-2b40-4ed6-8673-2e9287ce0bf7
	I1218 11:53:46.078296  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:46.078302  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:46.078307  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:46.078312  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:46.078317  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:46 GMT
	I1218 11:53:46.079532  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"891"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"887","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82968 chars]
	I1218 11:53:46.083180  706399 system_pods.go:59] 12 kube-system pods found
	I1218 11:53:46.083218  706399 system_pods.go:61] "coredns-5dd5756b68-nl8xc" [17cd3c37-30e8-4d98-81f5-44f58135adf3] Running
	I1218 11:53:46.083226  706399 system_pods.go:61] "etcd-multinode-107476" [57bcfe21-f4da-4bcf-bb4e-385b695e1e0f] Running
	I1218 11:53:46.083231  706399 system_pods.go:61] "kindnet-6wlkb" [1cf338b4-8a33-4e69-aa83-3cd29b041e08] Running
	I1218 11:53:46.083237  706399 system_pods.go:61] "kindnet-8hrhv" [ef739466-48d4-4fbd-8fa5-63a41e4c6833] Running
	I1218 11:53:46.083242  706399 system_pods.go:61] "kindnet-l9h8d" [0acf0fd4-5988-4545-828c-7cb6076a5b18] Running
	I1218 11:53:46.083248  706399 system_pods.go:61] "kube-apiserver-multinode-107476" [ed1a5fb5-539a-4a7d-9977-42e1392858fb] Running
	I1218 11:53:46.083263  706399 system_pods.go:61] "kube-controller-manager-multinode-107476" [9b1fc3f6-07ef-4577-9135-a1c4844e5555] Running
	I1218 11:53:46.083274  706399 system_pods.go:61] "kube-proxy-9xwh7" [d1b02596-ab29-4f7a-8118-bd091eef9e44] Running
	I1218 11:53:46.083283  706399 system_pods.go:61] "kube-proxy-ff4bs" [a5e9af15-7c15-4de8-8be0-1b8e7289125f] Running
	I1218 11:53:46.083290  706399 system_pods.go:61] "kube-proxy-jf8kx" [060b1020-573b-4b35-9a0b-e04f37535267] Running
	I1218 11:53:46.083299  706399 system_pods.go:61] "kube-scheduler-multinode-107476" [08f65d94-d942-4ae5-a937-e3efff4b51dd] Running
	I1218 11:53:46.083306  706399 system_pods.go:61] "storage-provisioner" [e04ec19d-39a8-4849-b604-8e46b7f9cea3] Running
	I1218 11:53:46.083317  706399 system_pods.go:74] duration metric: took 182.043479ms to wait for pod list to return data ...
	I1218 11:53:46.083328  706399 default_sa.go:34] waiting for default service account to be created ...
	I1218 11:53:46.273839  706399 request.go:629] Waited for 190.41018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/default/serviceaccounts
	I1218 11:53:46.273914  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/default/serviceaccounts
	I1218 11:53:46.273919  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:46.273928  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:46.273934  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:46.277176  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:46.277201  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:46.277209  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:46.277219  706399 round_trippers.go:580]     Content-Length: 261
	I1218 11:53:46.277227  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:46 GMT
	I1218 11:53:46.277236  706399 round_trippers.go:580]     Audit-Id: 8fb527bf-40a9-449e-b359-393d44708047
	I1218 11:53:46.277245  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:46.277251  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:46.277260  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:46.277289  706399 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"891"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"d939767d-22df-4871-b1e9-1f264cd78bb5","resourceVersion":"351","creationTimestamp":"2023-12-18T11:49:29Z"}}]}
	I1218 11:53:46.277563  706399 default_sa.go:45] found service account: "default"
	I1218 11:53:46.277611  706399 default_sa.go:55] duration metric: took 194.253503ms for default service account to be created ...
	I1218 11:53:46.277627  706399 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 11:53:46.474114  706399 request.go:629] Waited for 196.394547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:46.474195  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:46.474203  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:46.474215  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:46.474228  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:46.478438  706399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 11:53:46.478468  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:46.478479  706399 round_trippers.go:580]     Audit-Id: bb45fe89-dded-417e-8392-f9b3d76b81f5
	I1218 11:53:46.478488  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:46.478496  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:46.478505  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:46.478512  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:46.478528  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:46 GMT
	I1218 11:53:46.479114  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"887","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82968 chars]
	I1218 11:53:46.481559  706399 system_pods.go:86] 12 kube-system pods found
	I1218 11:53:46.481584  706399 system_pods.go:89] "coredns-5dd5756b68-nl8xc" [17cd3c37-30e8-4d98-81f5-44f58135adf3] Running
	I1218 11:53:46.481592  706399 system_pods.go:89] "etcd-multinode-107476" [57bcfe21-f4da-4bcf-bb4e-385b695e1e0f] Running
	I1218 11:53:46.481599  706399 system_pods.go:89] "kindnet-6wlkb" [1cf338b4-8a33-4e69-aa83-3cd29b041e08] Running
	I1218 11:53:46.481605  706399 system_pods.go:89] "kindnet-8hrhv" [ef739466-48d4-4fbd-8fa5-63a41e4c6833] Running
	I1218 11:53:46.481610  706399 system_pods.go:89] "kindnet-l9h8d" [0acf0fd4-5988-4545-828c-7cb6076a5b18] Running
	I1218 11:53:46.481619  706399 system_pods.go:89] "kube-apiserver-multinode-107476" [ed1a5fb5-539a-4a7d-9977-42e1392858fb] Running
	I1218 11:53:46.481627  706399 system_pods.go:89] "kube-controller-manager-multinode-107476" [9b1fc3f6-07ef-4577-9135-a1c4844e5555] Running
	I1218 11:53:46.481634  706399 system_pods.go:89] "kube-proxy-9xwh7" [d1b02596-ab29-4f7a-8118-bd091eef9e44] Running
	I1218 11:53:46.481643  706399 system_pods.go:89] "kube-proxy-ff4bs" [a5e9af15-7c15-4de8-8be0-1b8e7289125f] Running
	I1218 11:53:46.481651  706399 system_pods.go:89] "kube-proxy-jf8kx" [060b1020-573b-4b35-9a0b-e04f37535267] Running
	I1218 11:53:46.481658  706399 system_pods.go:89] "kube-scheduler-multinode-107476" [08f65d94-d942-4ae5-a937-e3efff4b51dd] Running
	I1218 11:53:46.481667  706399 system_pods.go:89] "storage-provisioner" [e04ec19d-39a8-4849-b604-8e46b7f9cea3] Running
	I1218 11:53:46.481677  706399 system_pods.go:126] duration metric: took 204.042426ms to wait for k8s-apps to be running ...
	I1218 11:53:46.481690  706399 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 11:53:46.481747  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 11:53:46.496708  706399 system_svc.go:56] duration metric: took 15.008248ms WaitForService to wait for kubelet.
	I1218 11:53:46.496742  706399 kubeadm.go:581] duration metric: took 15.510812865s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 11:53:46.496766  706399 node_conditions.go:102] verifying NodePressure condition ...
	I1218 11:53:46.674277  706399 request.go:629] Waited for 177.41815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes
	I1218 11:53:46.674357  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes
	I1218 11:53:46.674362  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:46.674418  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:46.674489  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:46.677744  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:46.677763  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:46.677771  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:46.677777  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:46.677783  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:46 GMT
	I1218 11:53:46.677788  706399 round_trippers.go:580]     Audit-Id: 127b003d-0ea0-41a7-833f-6b9650904cf1
	I1218 11:53:46.677794  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:46.677803  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:46.678201  706399 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14648 chars]
	I1218 11:53:46.678828  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:46.678850  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:46.678863  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:46.678867  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:46.678872  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:46.678875  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:46.678879  706399 node_conditions.go:105] duration metric: took 182.108972ms to run NodePressure ...
	I1218 11:53:46.678892  706399 start.go:228] waiting for startup goroutines ...
	I1218 11:53:46.678901  706399 start.go:233] waiting for cluster config update ...
	I1218 11:53:46.678914  706399 start.go:242] writing updated cluster config ...
	I1218 11:53:46.679419  706399 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:53:46.679525  706399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/config.json ...
	I1218 11:53:46.683229  706399 out.go:177] * Starting worker node multinode-107476-m02 in cluster multinode-107476
	I1218 11:53:46.684696  706399 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 11:53:46.684730  706399 cache.go:56] Caching tarball of preloaded images
	I1218 11:53:46.684832  706399 preload.go:174] Found /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 11:53:46.684846  706399 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 11:53:46.684979  706399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/config.json ...
	I1218 11:53:46.685210  706399 start.go:365] acquiring machines lock for multinode-107476-m02: {Name:mkb0cc9fb73bf09f8db2889f035117cd52674d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 11:53:46.685261  706399 start.go:369] acquired machines lock for "multinode-107476-m02" in 28.185µs
	I1218 11:53:46.685282  706399 start.go:96] Skipping create...Using existing machine configuration
	I1218 11:53:46.685293  706399 fix.go:54] fixHost starting: m02
	I1218 11:53:46.685600  706399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:53:46.685626  706399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:53:46.700004  706399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44879
	I1218 11:53:46.700443  706399 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:53:46.700912  706399 main.go:141] libmachine: Using API Version  1
	I1218 11:53:46.700933  706399 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:53:46.701277  706399 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:53:46.701452  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:53:46.701622  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetState
	I1218 11:53:46.703098  706399 fix.go:102] recreateIfNeeded on multinode-107476-m02: state=Stopped err=<nil>
	I1218 11:53:46.703120  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	W1218 11:53:46.703304  706399 fix.go:128] unexpected machine state, will restart: <nil>
	I1218 11:53:46.705286  706399 out.go:177] * Restarting existing kvm2 VM for "multinode-107476-m02" ...
	I1218 11:53:46.706596  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .Start
	I1218 11:53:46.706784  706399 main.go:141] libmachine: (multinode-107476-m02) Ensuring networks are active...
	I1218 11:53:46.707411  706399 main.go:141] libmachine: (multinode-107476-m02) Ensuring network default is active
	I1218 11:53:46.707790  706399 main.go:141] libmachine: (multinode-107476-m02) Ensuring network mk-multinode-107476 is active
	I1218 11:53:46.708193  706399 main.go:141] libmachine: (multinode-107476-m02) Getting domain xml...
	I1218 11:53:46.708862  706399 main.go:141] libmachine: (multinode-107476-m02) Creating domain...
	I1218 11:53:47.936995  706399 main.go:141] libmachine: (multinode-107476-m02) Waiting to get IP...
	I1218 11:53:47.937889  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:47.938288  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:47.938375  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:47.938256  706643 retry.go:31] will retry after 227.139333ms: waiting for machine to come up
	I1218 11:53:48.166820  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:48.167284  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:48.167314  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:48.167220  706643 retry.go:31] will retry after 375.610064ms: waiting for machine to come up
	I1218 11:53:48.544738  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:48.545081  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:48.545107  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:48.545047  706643 retry.go:31] will retry after 378.162219ms: waiting for machine to come up
	I1218 11:53:48.924609  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:48.925035  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:48.925066  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:48.924973  706643 retry.go:31] will retry after 372.216471ms: waiting for machine to come up
	I1218 11:53:49.298428  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:49.298906  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:49.298931  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:49.298873  706643 retry.go:31] will retry after 655.95423ms: waiting for machine to come up
	I1218 11:53:49.956567  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:49.957078  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:49.957106  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:49.957030  706643 retry.go:31] will retry after 860.476893ms: waiting for machine to come up
	I1218 11:53:50.819121  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:50.819479  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:50.819506  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:50.819449  706643 retry.go:31] will retry after 763.336427ms: waiting for machine to come up
	I1218 11:53:51.585019  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:51.585507  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:51.585542  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:51.585441  706643 retry.go:31] will retry after 963.292989ms: waiting for machine to come up
	I1218 11:53:52.550108  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:52.550472  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:52.550529  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:52.550417  706643 retry.go:31] will retry after 1.166437684s: waiting for machine to come up
	I1218 11:53:53.718762  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:53.719219  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:53.719252  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:53.719160  706643 retry.go:31] will retry after 2.253762045s: waiting for machine to come up
	I1218 11:53:55.974428  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:55.974863  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:55.974891  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:55.974822  706643 retry.go:31] will retry after 2.547747733s: waiting for machine to come up
	I1218 11:53:58.523817  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:58.524293  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:58.524342  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:58.524169  706643 retry.go:31] will retry after 2.214783254s: waiting for machine to come up
	I1218 11:54:00.740859  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:00.741279  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:54:00.741308  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:54:00.741245  706643 retry.go:31] will retry after 4.522253429s: waiting for machine to come up
	I1218 11:54:05.267134  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.267545  706399 main.go:141] libmachine: (multinode-107476-m02) Found IP for machine: 192.168.39.238
	I1218 11:54:05.267562  706399 main.go:141] libmachine: (multinode-107476-m02) Reserving static IP address...
	I1218 11:54:05.267572  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has current primary IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.268162  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "multinode-107476-m02", mac: "52:54:00:66:62:9b", ip: "192.168.39.238"} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.268198  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | skip adding static IP to network mk-multinode-107476 - found existing host DHCP lease matching {name: "multinode-107476-m02", mac: "52:54:00:66:62:9b", ip: "192.168.39.238"}
	I1218 11:54:05.268217  706399 main.go:141] libmachine: (multinode-107476-m02) Reserved static IP address: 192.168.39.238
	I1218 11:54:05.268237  706399 main.go:141] libmachine: (multinode-107476-m02) Waiting for SSH to be available...
	I1218 11:54:05.268253  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | Getting to WaitForSSH function...
	I1218 11:54:05.270329  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.270682  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.270713  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.270879  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | Using SSH client type: external
	I1218 11:54:05.270921  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa (-rw-------)
	I1218 11:54:05.270945  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1218 11:54:05.270955  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | About to run SSH command:
	I1218 11:54:05.270967  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | exit 0
	I1218 11:54:05.359260  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | SSH cmd err, output: <nil>: 
	I1218 11:54:05.359669  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetConfigRaw
	I1218 11:54:05.360312  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetIP
	I1218 11:54:05.362713  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.363152  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.363183  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.363469  706399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/config.json ...
	I1218 11:54:05.363688  706399 machine.go:88] provisioning docker machine ...
	I1218 11:54:05.363708  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:05.363941  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetMachineName
	I1218 11:54:05.364144  706399 buildroot.go:166] provisioning hostname "multinode-107476-m02"
	I1218 11:54:05.364165  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetMachineName
	I1218 11:54:05.364403  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:05.366681  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.367078  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.367106  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.367207  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:05.367386  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.367524  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.367640  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:05.367789  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:05.368264  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:05.368292  706399 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-107476-m02 && echo "multinode-107476-m02" | sudo tee /etc/hostname
	I1218 11:54:05.497634  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-107476-m02
	
	I1218 11:54:05.497668  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:05.500537  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.500970  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.501003  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.501203  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:05.501432  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.501618  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.501779  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:05.501985  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:05.502309  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:05.502328  706399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-107476-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-107476-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-107476-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 11:54:05.623703  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 11:54:05.623739  706399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17824-683489/.minikube CaCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17824-683489/.minikube}
	I1218 11:54:05.623762  706399 buildroot.go:174] setting up certificates
	I1218 11:54:05.623773  706399 provision.go:83] configureAuth start
	I1218 11:54:05.623782  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetMachineName
	I1218 11:54:05.624072  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetIP
	I1218 11:54:05.626748  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.627115  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.627143  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.627342  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:05.629559  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.629885  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.629931  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.630011  706399 provision.go:138] copyHostCerts
	I1218 11:54:05.630042  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem
	I1218 11:54:05.630074  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem, removing ...
	I1218 11:54:05.630086  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem
	I1218 11:54:05.630147  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem (1082 bytes)
	I1218 11:54:05.630219  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem
	I1218 11:54:05.630242  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem, removing ...
	I1218 11:54:05.630249  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem
	I1218 11:54:05.630271  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem (1123 bytes)
	I1218 11:54:05.630313  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem
	I1218 11:54:05.630328  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem, removing ...
	I1218 11:54:05.630334  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem
	I1218 11:54:05.630353  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem (1679 bytes)
	I1218 11:54:05.630395  706399 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca-key.pem org=jenkins.multinode-107476-m02 san=[192.168.39.238 192.168.39.238 localhost 127.0.0.1 minikube multinode-107476-m02]
	I1218 11:54:05.741217  706399 provision.go:172] copyRemoteCerts
	I1218 11:54:05.741280  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 11:54:05.741305  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:05.744095  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.744415  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.744451  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.744641  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:05.744867  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.745081  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:05.745239  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa Username:docker}
	I1218 11:54:05.832540  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 11:54:05.832629  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 11:54:05.857130  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 11:54:05.857201  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1218 11:54:05.880270  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 11:54:05.880339  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 11:54:05.904290  706399 provision.go:86] duration metric: configureAuth took 280.501532ms
	I1218 11:54:05.904323  706399 buildroot.go:189] setting minikube options for container-runtime
	I1218 11:54:05.904615  706399 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:54:05.904650  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:05.904939  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:05.907613  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.908019  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.908060  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.908259  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:05.908465  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.908634  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.908797  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:05.908991  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:05.909320  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:05.909336  706399 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 11:54:06.025905  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 11:54:06.025936  706399 buildroot.go:70] root file system type: tmpfs
	I1218 11:54:06.026101  706399 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 11:54:06.026127  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:06.029047  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:06.029390  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:06.029429  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:06.029644  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:06.029864  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:06.030054  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:06.030178  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:06.030331  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:06.030646  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:06.030705  706399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.124"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 11:54:06.156093  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.124
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 11:54:06.156134  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:06.159082  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:06.159496  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:06.159528  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:06.159684  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:06.159913  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:06.160156  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:06.160304  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:06.160478  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:06.160807  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:06.160825  706399 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 11:54:07.046577  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1218 11:54:07.046609  706399 machine.go:91] provisioned docker machine in 1.68290659s
	I1218 11:54:07.046627  706399 start.go:300] post-start starting for "multinode-107476-m02" (driver="kvm2")
	I1218 11:54:07.046641  706399 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 11:54:07.046672  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:07.047004  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 11:54:07.047085  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:07.049936  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.050337  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:07.050373  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.050532  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:07.050720  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:07.050893  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:07.051075  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa Username:docker}
	I1218 11:54:07.137937  706399 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 11:54:07.141965  706399 command_runner.go:130] > NAME=Buildroot
	I1218 11:54:07.141990  706399 command_runner.go:130] > VERSION=2021.02.12-1-g0492d51-dirty
	I1218 11:54:07.141996  706399 command_runner.go:130] > ID=buildroot
	I1218 11:54:07.142004  706399 command_runner.go:130] > VERSION_ID=2021.02.12
	I1218 11:54:07.142016  706399 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1218 11:54:07.142062  706399 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 11:54:07.142079  706399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17824-683489/.minikube/addons for local assets ...
	I1218 11:54:07.142150  706399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17824-683489/.minikube/files for local assets ...
	I1218 11:54:07.142249  706399 filesync.go:149] local asset: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem -> 6907392.pem in /etc/ssl/certs
	I1218 11:54:07.142262  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem -> /etc/ssl/certs/6907392.pem
	I1218 11:54:07.142338  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 11:54:07.150461  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem --> /etc/ssl/certs/6907392.pem (1708 bytes)
	I1218 11:54:07.173512  706399 start.go:303] post-start completed in 126.867172ms
	I1218 11:54:07.173544  706399 fix.go:56] fixHost completed within 20.488252806s
	I1218 11:54:07.173567  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:07.176291  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.176751  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:07.176783  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.176950  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:07.177185  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:07.177343  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:07.177560  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:07.177727  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:07.178069  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:07.178084  706399 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1218 11:54:07.292631  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702900447.242005495
	
	I1218 11:54:07.292655  706399 fix.go:206] guest clock: 1702900447.242005495
	I1218 11:54:07.292662  706399 fix.go:219] Guest: 2023-12-18 11:54:07.242005495 +0000 UTC Remote: 2023-12-18 11:54:07.173548129 +0000 UTC m=+83.636906782 (delta=68.457366ms)
	I1218 11:54:07.292718  706399 fix.go:190] guest clock delta is within tolerance: 68.457366ms
	I1218 11:54:07.292725  706399 start.go:83] releasing machines lock for "multinode-107476-m02", held for 20.607451202s
	I1218 11:54:07.292751  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:07.293062  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetIP
	I1218 11:54:07.295732  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.296145  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:07.296179  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.298392  706399 out.go:177] * Found network options:
	I1218 11:54:07.299731  706399 out.go:177]   - NO_PROXY=192.168.39.124
	W1218 11:54:07.301071  706399 proxy.go:119] fail to check proxy env: Error ip not in block
	I1218 11:54:07.301110  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:07.301626  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:07.301817  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:07.301902  706399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 11:54:07.301942  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	W1218 11:54:07.302000  706399 proxy.go:119] fail to check proxy env: Error ip not in block
	I1218 11:54:07.302076  706399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 11:54:07.302097  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:07.304593  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.304845  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.304987  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:07.305018  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.305124  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:07.305254  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:07.305278  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.305303  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:07.305455  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:07.305523  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:07.305617  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:07.305681  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa Username:docker}
	I1218 11:54:07.305742  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:07.305842  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa Username:docker}
	I1218 11:54:07.391351  706399 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1218 11:54:07.412687  706399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 11:54:07.412710  706399 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 11:54:07.412781  706399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 11:54:07.429410  706399 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1218 11:54:07.429693  706399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1218 11:54:07.429717  706399 start.go:475] detecting cgroup driver to use...
	I1218 11:54:07.429853  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 11:54:07.445443  706399 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1218 11:54:07.445529  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 11:54:07.455706  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 11:54:07.465480  706399 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 11:54:07.465531  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 11:54:07.475348  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 11:54:07.485332  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 11:54:07.495743  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 11:54:07.505751  706399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 11:54:07.515919  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 11:54:07.525808  706399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 11:54:07.534674  706399 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 11:54:07.534812  706399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 11:54:07.544293  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:54:07.647636  706399 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 11:54:07.664455  706399 start.go:475] detecting cgroup driver to use...
	I1218 11:54:07.664544  706399 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 11:54:07.678392  706399 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1218 11:54:07.678419  706399 command_runner.go:130] > [Unit]
	I1218 11:54:07.678429  706399 command_runner.go:130] > Description=Docker Application Container Engine
	I1218 11:54:07.678438  706399 command_runner.go:130] > Documentation=https://docs.docker.com
	I1218 11:54:07.678446  706399 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1218 11:54:07.678454  706399 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1218 11:54:07.678468  706399 command_runner.go:130] > StartLimitBurst=3
	I1218 11:54:07.678475  706399 command_runner.go:130] > StartLimitIntervalSec=60
	I1218 11:54:07.678482  706399 command_runner.go:130] > [Service]
	I1218 11:54:07.678489  706399 command_runner.go:130] > Type=notify
	I1218 11:54:07.678499  706399 command_runner.go:130] > Restart=on-failure
	I1218 11:54:07.678506  706399 command_runner.go:130] > Environment=NO_PROXY=192.168.39.124
	I1218 11:54:07.678522  706399 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1218 11:54:07.678539  706399 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1218 11:54:07.678552  706399 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1218 11:54:07.678569  706399 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1218 11:54:07.678579  706399 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1218 11:54:07.678623  706399 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1218 11:54:07.678642  706399 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1218 11:54:07.678658  706399 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1218 11:54:07.678672  706399 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1218 11:54:07.678681  706399 command_runner.go:130] > ExecStart=
	I1218 11:54:07.678704  706399 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1218 11:54:07.678716  706399 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1218 11:54:07.678732  706399 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1218 11:54:07.678739  706399 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1218 11:54:07.678746  706399 command_runner.go:130] > LimitNOFILE=infinity
	I1218 11:54:07.678750  706399 command_runner.go:130] > LimitNPROC=infinity
	I1218 11:54:07.678754  706399 command_runner.go:130] > LimitCORE=infinity
	I1218 11:54:07.678759  706399 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1218 11:54:07.678767  706399 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1218 11:54:07.678773  706399 command_runner.go:130] > TasksMax=infinity
	I1218 11:54:07.678779  706399 command_runner.go:130] > TimeoutStartSec=0
	I1218 11:54:07.678786  706399 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1218 11:54:07.678790  706399 command_runner.go:130] > Delegate=yes
	I1218 11:54:07.678797  706399 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1218 11:54:07.678805  706399 command_runner.go:130] > KillMode=process
	I1218 11:54:07.678811  706399 command_runner.go:130] > [Install]
	I1218 11:54:07.678817  706399 command_runner.go:130] > WantedBy=multi-user.target
	I1218 11:54:07.678881  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 11:54:07.699422  706399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 11:54:07.717253  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 11:54:07.729421  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 11:54:07.740150  706399 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 11:54:07.771472  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 11:54:07.783922  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 11:54:07.801472  706399 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1218 11:54:07.801565  706399 ssh_runner.go:195] Run: which cri-dockerd
	I1218 11:54:07.805378  706399 command_runner.go:130] > /usr/bin/cri-dockerd
	I1218 11:54:07.805607  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 11:54:07.814619  706399 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 11:54:07.830501  706399 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 11:54:07.940117  706399 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 11:54:08.043122  706399 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 11:54:08.043192  706399 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1218 11:54:08.059638  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:54:08.160537  706399 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 11:54:09.625721  706399 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4651404s)
	I1218 11:54:09.625800  706399 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 11:54:09.727037  706399 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1218 11:54:09.837890  706399 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 11:54:09.952084  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:54:10.068114  706399 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1218 11:54:10.082662  706399 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I1218 11:54:10.083512  706399 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1218 11:54:10.094378  706399 command_runner.go:130] > -- Journal begins at Mon 2023-12-18 11:53:58 UTC, ends at Mon 2023-12-18 11:54:10 UTC. --
	I1218 11:54:10.094403  706399 command_runner.go:130] > Dec 18 11:53:59 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1218 11:54:10.094413  706399 command_runner.go:130] > Dec 18 11:53:59 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1218 11:54:10.094426  706399 command_runner.go:130] > Dec 18 11:54:01 minikube systemd[1]: cri-docker.socket: Succeeded.
	I1218 11:54:10.094437  706399 command_runner.go:130] > Dec 18 11:54:01 minikube systemd[1]: Closed CRI Docker Socket for the API.
	I1218 11:54:10.094447  706399 command_runner.go:130] > Dec 18 11:54:01 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	I1218 11:54:10.094463  706399 command_runner.go:130] > Dec 18 11:54:01 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1218 11:54:10.094476  706399 command_runner.go:130] > Dec 18 11:54:01 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1218 11:54:10.094488  706399 command_runner.go:130] > Dec 18 11:54:04 minikube systemd[1]: cri-docker.socket: Succeeded.
	I1218 11:54:10.094501  706399 command_runner.go:130] > Dec 18 11:54:04 minikube systemd[1]: Closed CRI Docker Socket for the API.
	I1218 11:54:10.094509  706399 command_runner.go:130] > Dec 18 11:54:04 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	I1218 11:54:10.094518  706399 command_runner.go:130] > Dec 18 11:54:04 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1218 11:54:10.094526  706399 command_runner.go:130] > Dec 18 11:54:04 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1218 11:54:10.094544  706399 command_runner.go:130] > Dec 18 11:54:06 multinode-107476-m02 systemd[1]: cri-docker.socket: Succeeded.
	I1218 11:54:10.094553  706399 command_runner.go:130] > Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Closed CRI Docker Socket for the API.
	I1218 11:54:10.094561  706399 command_runner.go:130] > Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	I1218 11:54:10.094570  706399 command_runner.go:130] > Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Starting CRI Docker Socket for the API.
	I1218 11:54:10.094579  706399 command_runner.go:130] > Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Listening on CRI Docker Socket for the API.
	I1218 11:54:10.094587  706399 command_runner.go:130] > Dec 18 11:54:10 multinode-107476-m02 systemd[1]: cri-docker.socket: Succeeded.
	I1218 11:54:10.094596  706399 command_runner.go:130] > Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Closed CRI Docker Socket for the API.
	I1218 11:54:10.094607  706399 command_runner.go:130] > Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	I1218 11:54:10.094618  706399 command_runner.go:130] > Dec 18 11:54:10 multinode-107476-m02 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	I1218 11:54:10.094628  706399 command_runner.go:130] > Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	I1218 11:54:10.097238  706399 out.go:177] 
	W1218 11:54:10.099022  706399 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Mon 2023-12-18 11:53:58 UTC, ends at Mon 2023-12-18 11:54:10 UTC. --
	Dec 18 11:53:59 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 18 11:53:59 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 18 11:54:01 minikube systemd[1]: cri-docker.socket: Succeeded.
	Dec 18 11:54:01 minikube systemd[1]: Closed CRI Docker Socket for the API.
	Dec 18 11:54:01 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 18 11:54:01 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 18 11:54:01 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 18 11:54:04 minikube systemd[1]: cri-docker.socket: Succeeded.
	Dec 18 11:54:04 minikube systemd[1]: Closed CRI Docker Socket for the API.
	Dec 18 11:54:04 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 18 11:54:04 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 18 11:54:04 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 18 11:54:06 multinode-107476-m02 systemd[1]: cri-docker.socket: Succeeded.
	Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 18 11:54:10 multinode-107476-m02 systemd[1]: cri-docker.socket: Succeeded.
	Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 18 11:54:10 multinode-107476-m02 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1218 11:54:10.099052  706399 out.go:239] * 
	W1218 11:54:10.099923  706399 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 11:54:10.101451  706399 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-12-18 11:52:55 UTC, ends at Mon 2023-12-18 11:54:11 UTC. --
	Dec 18 11:53:31 multinode-107476 dockerd[827]: time="2023-12-18T11:53:31.027773875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:31 multinode-107476 dockerd[827]: time="2023-12-18T11:53:31.027863057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 11:53:31 multinode-107476 dockerd[827]: time="2023-12-18T11:53:31.027894145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:42 multinode-107476 dockerd[827]: time="2023-12-18T11:53:42.525099629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 11:53:42 multinode-107476 dockerd[827]: time="2023-12-18T11:53:42.525398559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:42 multinode-107476 dockerd[827]: time="2023-12-18T11:53:42.525520470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 11:53:42 multinode-107476 dockerd[827]: time="2023-12-18T11:53:42.525548881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:42 multinode-107476 dockerd[827]: time="2023-12-18T11:53:42.544452319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 11:53:42 multinode-107476 dockerd[827]: time="2023-12-18T11:53:42.544571854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:42 multinode-107476 dockerd[827]: time="2023-12-18T11:53:42.544594058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 11:53:42 multinode-107476 dockerd[827]: time="2023-12-18T11:53:42.544605950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:43 multinode-107476 cri-dockerd[1042]: time="2023-12-18T11:53:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4b509c1e475b06e9c062d47412c861219a821775adca61d3b54f342424644394/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 18 11:53:43 multinode-107476 cri-dockerd[1042]: time="2023-12-18T11:53:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d6381f412cd287ea043ed1bc7bbea0281bf97248c6d11131123e855abb1ac8d9/resolv.conf as [nameserver 192.168.122.1]"
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.270951082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.271351233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.271543854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.271763380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.289383236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.292535007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.294037889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.294399174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:58 multinode-107476 dockerd[821]: time="2023-12-18T11:53:58.368249488Z" level=info msg="ignoring event" container=123ceedfce1ccd5f27ac8b7368fca1d6cacecf05d48983a4f7aa454d139d8b08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 11:53:58 multinode-107476 dockerd[827]: time="2023-12-18T11:53:58.368921160Z" level=info msg="shim disconnected" id=123ceedfce1ccd5f27ac8b7368fca1d6cacecf05d48983a4f7aa454d139d8b08 namespace=moby
	Dec 18 11:53:58 multinode-107476 dockerd[827]: time="2023-12-18T11:53:58.369048203Z" level=warning msg="cleaning up after shim disconnected" id=123ceedfce1ccd5f27ac8b7368fca1d6cacecf05d48983a4f7aa454d139d8b08 namespace=moby
	Dec 18 11:53:58 multinode-107476 dockerd[827]: time="2023-12-18T11:53:58.369060508Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4194bb8a74edb       ead0a4a53df89                                                                                         28 seconds ago      Running             coredns                   1                   d6381f412cd28       coredns-5dd5756b68-nl8xc
	c2db9601c5995       8c811b4aec35f                                                                                         28 seconds ago      Running             busybox                   1                   4b509c1e475b0       busybox-5bc68d56bd-sjq8b
	8f8819408c224       c7d1297425461                                                                                         41 seconds ago      Running             kindnet-cni               1                   8a3f2a24cd178       kindnet-6wlkb
	123ceedfce1cc       6e38f40d628db                                                                                         44 seconds ago      Exited              storage-provisioner       1                   5a2ed62879795       storage-provisioner
	f7a1971535c43       83f6cc407eed8                                                                                         44 seconds ago      Running             kube-proxy                1                   6999f04e162af       kube-proxy-jf8kx
	cdc0b5d46762e       73deb9a3f7025                                                                                         49 seconds ago      Running             etcd                      1                   3a312846e9f6f       etcd-multinode-107476
	b53866e4bc682       e3db313c6dbc0                                                                                         49 seconds ago      Running             kube-scheduler            1                   929d541b45df5       kube-scheduler-multinode-107476
	08bca6e395b93       7fe0e6f37db33                                                                                         49 seconds ago      Running             kube-apiserver            1                   41771edbf29b9       kube-apiserver-multinode-107476
	eb37efd287f8f       d058aa5ab969c                                                                                         50 seconds ago      Running             kube-controller-manager   1                   afb921712c653       kube-controller-manager-multinode-107476
	cb290feaafc5e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago       Exited              busybox                   0                   3842d71341658       busybox-5bc68d56bd-sjq8b
	8a9a67bb77c43       ead0a4a53df89                                                                                         4 minutes ago       Exited              coredns                   0                   a5499078bf2ca       coredns-5dd5756b68-nl8xc
	f6e3111557b6b       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              4 minutes ago       Exited              kindnet-cni               0                   ecad224e7387c       kindnet-6wlkb
	9bd0f65050dcc       83f6cc407eed8                                                                                         4 minutes ago       Exited              kube-proxy                0                   ca78bca379ebe       kube-proxy-jf8kx
	367a10c5d07b5       e3db313c6dbc0                                                                                         5 minutes ago       Exited              kube-scheduler            0                   d06f419d4917c       kube-scheduler-multinode-107476
	fcaaf17b1eded       73deb9a3f7025                                                                                         5 minutes ago       Exited              etcd                      0                   7539f69199926       etcd-multinode-107476
	9226aa8cd1e99       7fe0e6f37db33                                                                                         5 minutes ago       Exited              kube-apiserver            0                   51c0e2b565115       kube-apiserver-multinode-107476
	4b66d146a3f47       d058aa5ab969c                                                                                         5 minutes ago       Exited              kube-controller-manager   0                   49adada57ae16       kube-controller-manager-multinode-107476
	
	* 
	* ==> coredns [4194bb8a74ed] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48516 - 21045 "HINFO IN 6898711184610774818.5232844636684493161. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020043569s
	
	* 
	* ==> coredns [8a9a67bb77c4] <==
	* [INFO] 10.244.0.3:39580 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001880469s
	[INFO] 10.244.0.3:45076 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136652s
	[INFO] 10.244.0.3:45753 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145352s
	[INFO] 10.244.0.3:38917 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001980062s
	[INFO] 10.244.0.3:52593 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071599s
	[INFO] 10.244.0.3:47945 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133323s
	[INFO] 10.244.0.3:51814 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069265s
	[INFO] 10.244.1.2:50202 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123562s
	[INFO] 10.244.1.2:45920 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149315s
	[INFO] 10.244.1.2:37077 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00033414s
	[INFO] 10.244.1.2:42462 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098089s
	[INFO] 10.244.0.3:34819 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102589s
	[INFO] 10.244.0.3:39334 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122985s
	[INFO] 10.244.0.3:36032 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000044929s
	[INFO] 10.244.0.3:49808 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066623s
	[INFO] 10.244.1.2:58102 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155245s
	[INFO] 10.244.1.2:52265 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000197453s
	[INFO] 10.244.1.2:51682 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000209848s
	[INFO] 10.244.1.2:46278 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000175008s
	[INFO] 10.244.0.3:46993 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110094s
	[INFO] 10.244.0.3:54791 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000086731s
	[INFO] 10.244.0.3:55681 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00008516s
	[INFO] 10.244.0.3:46353 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000042946s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-107476
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-107476
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30d8ecd1811578f7b9db580c501c654c189f68d4
	                    minikube.k8s.io/name=multinode-107476
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_18T11_49_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 11:49:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-107476
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 11:54:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 11:53:33 +0000   Mon, 18 Dec 2023 11:49:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 11:53:33 +0000   Mon, 18 Dec 2023 11:49:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 11:53:33 +0000   Mon, 18 Dec 2023 11:49:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 11:53:33 +0000   Mon, 18 Dec 2023 11:53:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.124
	  Hostname:    multinode-107476
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 676cd36f41bf41bfb2277224047042bb
	  System UUID:                676cd36f-41bf-41bf-b227-7224047042bb
	  Boot ID:                    b2d790b8-b563-4ca9-b85c-e8ef9f11b443
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-sjq8b                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 coredns-5dd5756b68-nl8xc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m42s
	  kube-system                 etcd-multinode-107476                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m54s
	  kube-system                 kindnet-6wlkb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m42s
	  kube-system                 kube-apiserver-multinode-107476             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-controller-manager-multinode-107476    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-jf8kx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-scheduler-multinode-107476             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m40s                kube-proxy       
	  Normal  Starting                 43s                  kube-proxy       
	  Normal  Starting                 5m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m4s (x8 over 5m4s)  kubelet          Node multinode-107476 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m4s (x8 over 5m4s)  kubelet          Node multinode-107476 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m4s (x7 over 5m4s)  kubelet          Node multinode-107476 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     4m55s                kubelet          Node multinode-107476 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m55s                kubelet          Node multinode-107476 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s                kubelet          Node multinode-107476 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  4m55s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m55s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m43s                node-controller  Node multinode-107476 event: Registered Node multinode-107476 in Controller
	  Normal  NodeReady                4m32s                kubelet          Node multinode-107476 status is now: NodeReady
	  Normal  Starting                 51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s (x8 over 51s)    kubelet          Node multinode-107476 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 51s)    kubelet          Node multinode-107476 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x7 over 51s)    kubelet          Node multinode-107476 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           33s                  node-controller  Node multinode-107476 event: Registered Node multinode-107476 in Controller
	
	
	Name:               multinode-107476-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-107476-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30d8ecd1811578f7b9db580c501c654c189f68d4
	                    minikube.k8s.io/name=multinode-107476
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_18T11_52_06_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 11:50:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-107476-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 11:52:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 11:50:49 +0000   Mon, 18 Dec 2023 11:50:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 11:50:49 +0000   Mon, 18 Dec 2023 11:50:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 11:50:49 +0000   Mon, 18 Dec 2023 11:50:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 11:50:49 +0000   Mon, 18 Dec 2023 11:50:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    multinode-107476-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 835bc9952f0441a78a73352404b4fba8
	  System UUID:                835bc995-2f04-41a7-8a73-352404b4fba8
	  Boot ID:                    370f44f2-b022-4992-a457-7f0533c2bf00
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-8dg4d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kindnet-l9h8d               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m53s
	  kube-system                 kube-proxy-9xwh7            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m46s                  kube-proxy       
	  Normal  RegisteredNode           3m53s                  node-controller  Node multinode-107476-m02 event: Registered Node multinode-107476-m02 in Controller
	  Normal  NodeHasSufficientMemory  3m53s (x5 over 3m55s)  kubelet          Node multinode-107476-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s (x5 over 3m55s)  kubelet          Node multinode-107476-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s (x5 over 3m55s)  kubelet          Node multinode-107476-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m39s                  kubelet          Node multinode-107476-m02 status is now: NodeReady
	  Normal  RegisteredNode           33s                    node-controller  Node multinode-107476-m02 event: Registered Node multinode-107476-m02 in Controller
	
	
	Name:               multinode-107476-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-107476-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30d8ecd1811578f7b9db580c501c654c189f68d4
	                    minikube.k8s.io/name=multinode-107476
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_18T11_52_06_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 11:52:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-107476-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 11:52:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 11:52:12 +0000   Mon, 18 Dec 2023 11:52:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 11:52:12 +0000   Mon, 18 Dec 2023 11:52:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 11:52:12 +0000   Mon, 18 Dec 2023 11:52:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 11:52:12 +0000   Mon, 18 Dec 2023 11:52:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.39
	  Hostname:    multinode-107476-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 59fa96e01c85481e86d2a36b7fcbdc18
	  System UUID:                59fa96e0-1c85-481e-86d2-a36b7fcbdc18
	  Boot ID:                    a7bbf2f7-bb87-4c20-8e4a-b12a52f809b7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8hrhv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m54s
	  kube-system                 kube-proxy-ff4bs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m49s                  kube-proxy       
	  Normal  Starting                 2m5s                   kube-proxy       
	  Normal  Starting                 2m56s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m55s (x2 over 2m55s)  kubelet          Node multinode-107476-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m55s (x2 over 2m55s)  kubelet          Node multinode-107476-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m55s (x2 over 2m55s)  kubelet          Node multinode-107476-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                2m43s                  kubelet          Node multinode-107476-m03 status is now: NodeReady
	  Normal  Starting                 2m7s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m7s (x2 over 2m7s)    kubelet          Node multinode-107476-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x2 over 2m7s)    kubelet          Node multinode-107476-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m7s (x2 over 2m7s)    kubelet          Node multinode-107476-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                119s                   kubelet          Node multinode-107476-m03 status is now: NodeReady
	  Normal  RegisteredNode           33s                    node-controller  Node multinode-107476-m03 event: Registered Node multinode-107476-m03 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec18 11:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067451] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.374087] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.402052] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152571] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.620566] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec18 11:53] systemd-fstab-generator[514]: Ignoring "noauto" for root device
	[  +0.100222] systemd-fstab-generator[525]: Ignoring "noauto" for root device
	[  +1.237028] systemd-fstab-generator[748]: Ignoring "noauto" for root device
	[  +0.284846] systemd-fstab-generator[787]: Ignoring "noauto" for root device
	[  +0.111464] systemd-fstab-generator[798]: Ignoring "noauto" for root device
	[  +0.119413] systemd-fstab-generator[811]: Ignoring "noauto" for root device
	[  +1.565875] systemd-fstab-generator[987]: Ignoring "noauto" for root device
	[  +0.112671] systemd-fstab-generator[998]: Ignoring "noauto" for root device
	[  +0.105237] systemd-fstab-generator[1009]: Ignoring "noauto" for root device
	[  +0.108653] systemd-fstab-generator[1020]: Ignoring "noauto" for root device
	[  +0.118694] systemd-fstab-generator[1034]: Ignoring "noauto" for root device
	[ +11.940809] systemd-fstab-generator[1284]: Ignoring "noauto" for root device
	[  +0.411889] kauditd_printk_skb: 67 callbacks suppressed
	[ +18.348862] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [cdc0b5d46762] <==
	* {"level":"info","ts":"2023-12-18T11:53:23.013673Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T11:53:23.013818Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T11:53:23.014427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c switched to configuration voters=(15552116827903880748)"}
	{"level":"info","ts":"2023-12-18T11:53:23.016725Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e1e7008e9cae601b","local-member-id":"d7d437db3895ee2c","added-peer-id":"d7d437db3895ee2c","added-peer-peer-urls":["https://192.168.39.124:2380"]}
	{"level":"info","ts":"2023-12-18T11:53:23.017211Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e1e7008e9cae601b","local-member-id":"d7d437db3895ee2c","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T11:53:23.017611Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T11:53:23.031314Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-18T11:53:23.033683Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d7d437db3895ee2c","initial-advertise-peer-urls":["https://192.168.39.124:2380"],"listen-peer-urls":["https://192.168.39.124:2380"],"advertise-client-urls":["https://192.168.39.124:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.124:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-18T11:53:23.037338Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-18T11:53:23.038868Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.124:2380"}
	{"level":"info","ts":"2023-12-18T11:53:23.042885Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.124:2380"}
	{"level":"info","ts":"2023-12-18T11:53:24.354237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-18T11:53:24.354429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-18T11:53:24.354563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c received MsgPreVoteResp from d7d437db3895ee2c at term 2"}
	{"level":"info","ts":"2023-12-18T11:53:24.354667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c became candidate at term 3"}
	{"level":"info","ts":"2023-12-18T11:53:24.354742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c received MsgVoteResp from d7d437db3895ee2c at term 3"}
	{"level":"info","ts":"2023-12-18T11:53:24.354764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c became leader at term 3"}
	{"level":"info","ts":"2023-12-18T11:53:24.35478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d7d437db3895ee2c elected leader d7d437db3895ee2c at term 3"}
	{"level":"info","ts":"2023-12-18T11:53:24.356815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T11:53:24.356756Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d7d437db3895ee2c","local-member-attributes":"{Name:multinode-107476 ClientURLs:[https://192.168.39.124:2379]}","request-path":"/0/members/d7d437db3895ee2c/attributes","cluster-id":"e1e7008e9cae601b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-18T11:53:24.358324Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.124:2379"}
	{"level":"info","ts":"2023-12-18T11:53:24.35855Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T11:53:24.359207Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-18T11:53:24.359471Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-18T11:53:24.359717Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [fcaaf17b1ede] <==
	* WARNING: 2023/12/18 11:51:16 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2023-12-18T11:51:17.060176Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.891718ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17162246747463988395 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/multinode-107476-m03\" mod_revision:619 > success:<request_put:<key:\"/registry/minions/multinode-107476-m03\" value_size:1988 >> failure:<request_range:<key:\"/registry/minions/multinode-107476-m03\" > >>","response":"size:2057"}
	{"level":"info","ts":"2023-12-18T11:51:17.060358Z","caller":"traceutil/trace.go:171","msg":"trace[822089819] linearizableReadLoop","detail":"{readStateIndex:660; appliedIndex:657; }","duration":"654.470614ms","start":"2023-12-18T11:51:16.405877Z","end":"2023-12-18T11:51:17.060348Z","steps":["trace[822089819] 'read index received'  (duration: 211.139601ms)","trace[822089819] 'applied index is now lower than readState.Index'  (duration: 443.330596ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-18T11:51:17.060421Z","caller":"traceutil/trace.go:171","msg":"trace[1390095903] transaction","detail":"{read_only:false; number_of_response:1; response_revision:621; }","duration":"656.494913ms","start":"2023-12-18T11:51:16.403921Z","end":"2023-12-18T11:51:17.060416Z","steps":["trace[1390095903] 'process raft request'  (duration: 526.307885ms)","trace[1390095903] 'compare'  (duration: 129.823781ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-18T11:51:17.060461Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-18T11:51:16.403905Z","time spent":"656.530725ms","remote":"127.0.0.1:57216","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":42,"response count":0,"response size":2081,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-107476-m03\" mod_revision:619 > success:<request_put:<key:\"/registry/minions/multinode-107476-m03\" value_size:1988 >> failure:<request_range:<key:\"/registry/minions/multinode-107476-m03\" > >"}
	{"level":"warn","ts":"2023-12-18T11:51:17.060515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"654.631823ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-18T11:51:17.060607Z","caller":"traceutil/trace.go:171","msg":"trace[1476158489] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:623; }","duration":"654.733302ms","start":"2023-12-18T11:51:16.405865Z","end":"2023-12-18T11:51:17.060598Z","steps":["trace[1476158489] 'agreement among raft nodes before linearized reading'  (duration: 654.571196ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T11:51:17.060643Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-18T11:51:16.40586Z","time spent":"654.776199ms","remote":"127.0.0.1:57208","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" "}
	{"level":"warn","ts":"2023-12-18T11:51:17.060718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"334.48129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-18T11:51:17.060737Z","caller":"traceutil/trace.go:171","msg":"trace[1087470026] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:623; }","duration":"334.514544ms","start":"2023-12-18T11:51:16.726217Z","end":"2023-12-18T11:51:17.060732Z","steps":["trace[1087470026] 'agreement among raft nodes before linearized reading'  (duration: 334.465487ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T11:51:17.06076Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-18T11:51:16.726202Z","time spent":"334.55521ms","remote":"127.0.0.1:57170","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-12-18T11:51:17.060935Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.191265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-107476-m03\" ","response":"range_response_count:1 size:2046"}
	{"level":"info","ts":"2023-12-18T11:51:17.060974Z","caller":"traceutil/trace.go:171","msg":"trace[1957339670] range","detail":"{range_begin:/registry/minions/multinode-107476-m03; range_end:; response_count:1; response_revision:623; }","duration":"158.233634ms","start":"2023-12-18T11:51:16.902735Z","end":"2023-12-18T11:51:17.060968Z","steps":["trace[1957339670] 'agreement among raft nodes before linearized reading'  (duration: 158.115731ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T11:51:17.061057Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.161237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-107476-m03\" ","response":"range_response_count:1 size:2046"}
	{"level":"info","ts":"2023-12-18T11:51:17.061077Z","caller":"traceutil/trace.go:171","msg":"trace[2088574323] range","detail":"{range_begin:/registry/minions/multinode-107476-m03; range_end:; response_count:1; response_revision:623; }","duration":"107.183789ms","start":"2023-12-18T11:51:16.953888Z","end":"2023-12-18T11:51:17.061072Z","steps":["trace[2088574323] 'agreement among raft nodes before linearized reading'  (duration: 107.13834ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-18T11:52:16.117679Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-18T11:52:16.117782Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-107476","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.124:2380"],"advertise-client-urls":["https://192.168.39.124:2379"]}
	{"level":"warn","ts":"2023-12-18T11:52:16.117997Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.124:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-18T11:52:16.11804Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.124:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-18T11:52:16.118128Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-18T11:52:16.118182Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-18T11:52:16.160283Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d7d437db3895ee2c","current-leader-member-id":"d7d437db3895ee2c"}
	{"level":"info","ts":"2023-12-18T11:52:16.163787Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.124:2380"}
	{"level":"info","ts":"2023-12-18T11:52:16.164184Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.124:2380"}
	{"level":"info","ts":"2023-12-18T11:52:16.164202Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-107476","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.124:2380"],"advertise-client-urls":["https://192.168.39.124:2379"]}
	
	* 
	* ==> kernel <==
	*  11:54:11 up 1 min,  0 users,  load average: 1.00, 0.38, 0.14
	Linux multinode-107476 5.10.57 #1 SMP Wed Dec 13 22:38:26 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [8f8819408c22] <==
	* I1218 11:53:32.109544       1 main.go:227] handling current node
	I1218 11:53:32.109687       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:53:32.109694       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:53:32.109868       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.238 Flags: [] Table: 0} 
	I1218 11:53:32.109928       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:53:32.109935       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.3.0/24] 
	I1218 11:53:32.110066       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.39 Flags: [] Table: 0} 
	I1218 11:53:42.124517       1 main.go:223] Handling node with IPs: map[192.168.39.124:{}]
	I1218 11:53:42.124538       1 main.go:227] handling current node
	I1218 11:53:42.124548       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:53:42.124552       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:53:42.124643       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:53:42.124648       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.3.0/24] 
	I1218 11:53:52.138453       1 main.go:223] Handling node with IPs: map[192.168.39.124:{}]
	I1218 11:53:52.138560       1 main.go:227] handling current node
	I1218 11:53:52.138581       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:53:52.138595       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:53:52.138774       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:53:52.139139       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.3.0/24] 
	I1218 11:54:02.147368       1 main.go:223] Handling node with IPs: map[192.168.39.124:{}]
	I1218 11:54:02.147493       1 main.go:227] handling current node
	I1218 11:54:02.147521       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:54:02.147536       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:54:02.148012       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:54:02.148094       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kindnet [f6e3111557b6] <==
	* I1218 11:51:39.323178       1 main.go:223] Handling node with IPs: map[192.168.39.124:{}]
	I1218 11:51:39.323494       1 main.go:227] handling current node
	I1218 11:51:39.323523       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:51:39.323627       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:51:39.323918       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:51:39.324004       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.2.0/24] 
	I1218 11:51:49.329153       1 main.go:223] Handling node with IPs: map[192.168.39.124:{}]
	I1218 11:51:49.329174       1 main.go:227] handling current node
	I1218 11:51:49.329183       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:51:49.329188       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:51:49.329299       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:51:49.329304       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.2.0/24] 
	I1218 11:51:59.342400       1 main.go:223] Handling node with IPs: map[192.168.39.124:{}]
	I1218 11:51:59.342422       1 main.go:227] handling current node
	I1218 11:51:59.342431       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:51:59.342435       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:51:59.342789       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:51:59.342802       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.2.0/24] 
	I1218 11:52:09.357725       1 main.go:223] Handling node with IPs: map[192.168.39.124:{}]
	I1218 11:52:09.357782       1 main.go:227] handling current node
	I1218 11:52:09.357821       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:52:09.357828       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:52:09.358052       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:52:09.358059       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.3.0/24] 
	I1218 11:52:09.358104       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.39 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [08bca6e395b9] <==
	* I1218 11:53:25.739295       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1218 11:53:25.786267       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1218 11:53:25.786320       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1218 11:53:25.842501       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1218 11:53:25.888572       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1218 11:53:25.889029       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1218 11:53:25.889784       1 aggregator.go:166] initial CRD sync complete...
	I1218 11:53:25.889825       1 autoregister_controller.go:141] Starting autoregister controller
	I1218 11:53:25.889831       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1218 11:53:25.889837       1 cache.go:39] Caches are synced for autoregister controller
	I1218 11:53:25.926322       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1218 11:53:25.926336       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1218 11:53:25.927884       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1218 11:53:25.929130       1 shared_informer.go:318] Caches are synced for configmaps
	I1218 11:53:25.930387       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1218 11:53:25.932424       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1218 11:53:25.940866       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1218 11:53:26.726201       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1218 11:53:28.878768       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1218 11:53:29.032760       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1218 11:53:29.041687       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1218 11:53:29.122180       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1218 11:53:29.136322       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1218 11:53:38.806663       1 controller.go:624] quota admission added evaluator for: endpoints
	I1218 11:53:38.851788       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [9226aa8cd1e9] <==
	* W1218 11:52:25.229759       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.289960       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.327898       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.358046       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.404339       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.436346       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.452155       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.466931       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.469389       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.481640       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.560045       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.603910       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.647070       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.650829       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.707089       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.727369       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.733174       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.821372       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.840843       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.860014       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.885965       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.889747       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:26.011046       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:26.017886       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:26.139780       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [4b66d146a3f4] <==
	* I1218 11:50:35.473295       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1218 11:50:35.500019       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-8dg4d"
	I1218 11:50:35.511638       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-sjq8b"
	I1218 11:50:35.534394       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="60.148014ms"
	I1218 11:50:35.550663       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.153665ms"
	I1218 11:50:35.551666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="438.537µs"
	I1218 11:50:35.565307       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="68.182µs"
	I1218 11:50:35.569676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="161.273µs"
	I1218 11:50:39.403275       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.228315ms"
	I1218 11:50:39.404080       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="132.561µs"
	I1218 11:50:40.572326       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.863353ms"
	I1218 11:50:40.572435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="49.488µs"
	I1218 11:51:16.395695       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107476-m02"
	I1218 11:51:16.395903       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-107476-m03\" does not exist"
	I1218 11:51:16.724210       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-107476-m03" podCIDRs=["10.244.2.0/24"]
	I1218 11:51:17.079444       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ff4bs"
	I1218 11:51:17.079894       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8hrhv"
	I1218 11:51:18.551988       1 event.go:307] "Event occurred" object="multinode-107476-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-107476-m03 event: Registered Node multinode-107476-m03 in Controller"
	I1218 11:51:18.556078       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-107476-m03"
	I1218 11:51:28.437941       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107476-m02"
	I1218 11:52:03.593492       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107476-m02"
	I1218 11:52:04.452239       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-107476-m03\" does not exist"
	I1218 11:52:04.455937       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107476-m02"
	I1218 11:52:04.478011       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-107476-m03" podCIDRs=["10.244.3.0/24"]
	I1218 11:52:12.687766       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107476-m02"
	
	* 
	* ==> kube-controller-manager [eb37efd287f8] <==
	* I1218 11:53:38.841110       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-107476-m02"
	I1218 11:53:38.841166       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-107476-m03"
	I1218 11:53:38.841411       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1218 11:53:38.841461       1 taint_manager.go:210] "Sending events to api server"
	I1218 11:53:38.843043       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1218 11:53:38.844603       1 event.go:307] "Event occurred" object="multinode-107476" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-107476 event: Registered Node multinode-107476 in Controller"
	I1218 11:53:38.844807       1 event.go:307] "Event occurred" object="multinode-107476-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-107476-m02 event: Registered Node multinode-107476-m02 in Controller"
	I1218 11:53:38.844819       1 event.go:307] "Event occurred" object="multinode-107476-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-107476-m03 event: Registered Node multinode-107476-m03 in Controller"
	I1218 11:53:38.845602       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1218 11:53:38.845709       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1218 11:53:38.852796       1 shared_informer.go:318] Caches are synced for GC
	I1218 11:53:38.856476       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1218 11:53:38.906129       1 shared_informer.go:318] Caches are synced for attach detach
	I1218 11:53:38.926121       1 shared_informer.go:318] Caches are synced for stateful set
	I1218 11:53:38.962810       1 shared_informer.go:318] Caches are synced for daemon sets
	I1218 11:53:39.007122       1 shared_informer.go:318] Caches are synced for resource quota
	I1218 11:53:39.043476       1 shared_informer.go:318] Caches are synced for resource quota
	I1218 11:53:39.387873       1 shared_informer.go:318] Caches are synced for garbage collector
	I1218 11:53:39.390328       1 shared_informer.go:318] Caches are synced for garbage collector
	I1218 11:53:39.390377       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1218 11:53:44.207241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.012508ms"
	I1218 11:53:44.208423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.203µs"
	I1218 11:53:44.235406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.968µs"
	I1218 11:53:44.281516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.146942ms"
	I1218 11:53:44.281911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.705µs"
	
	* 
	* ==> kube-proxy [9bd0f65050dc] <==
	* I1218 11:49:31.222660       1 server_others.go:69] "Using iptables proxy"
	I1218 11:49:31.233090       1 node.go:141] Successfully retrieved node IP: 192.168.39.124
	I1218 11:49:31.272528       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1218 11:49:31.272850       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1218 11:49:31.276004       1 server_others.go:152] "Using iptables Proxier"
	I1218 11:49:31.276152       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1218 11:49:31.276675       1 server.go:846] "Version info" version="v1.28.4"
	I1218 11:49:31.276713       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 11:49:31.277461       1 config.go:188] "Starting service config controller"
	I1218 11:49:31.277519       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1218 11:49:31.277893       1 config.go:97] "Starting endpoint slice config controller"
	I1218 11:49:31.278110       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1218 11:49:31.279088       1 config.go:315] "Starting node config controller"
	I1218 11:49:31.279128       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1218 11:49:31.377652       1 shared_informer.go:318] Caches are synced for service config
	I1218 11:49:31.378886       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1218 11:49:31.379292       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [f7a1971535c4] <==
	* I1218 11:53:27.735307       1 server_others.go:69] "Using iptables proxy"
	I1218 11:53:27.761237       1 node.go:141] Successfully retrieved node IP: 192.168.39.124
	I1218 11:53:28.254132       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1218 11:53:28.254488       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1218 11:53:28.261044       1 server_others.go:152] "Using iptables Proxier"
	I1218 11:53:28.261584       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1218 11:53:28.262768       1 server.go:846] "Version info" version="v1.28.4"
	I1218 11:53:28.263033       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 11:53:28.264498       1 config.go:188] "Starting service config controller"
	I1218 11:53:28.265311       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1218 11:53:28.265565       1 config.go:97] "Starting endpoint slice config controller"
	I1218 11:53:28.265643       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1218 11:53:28.266498       1 config.go:315] "Starting node config controller"
	I1218 11:53:28.276097       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1218 11:53:28.276785       1 shared_informer.go:318] Caches are synced for node config
	I1218 11:53:28.366502       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1218 11:53:28.366545       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [367a10c5d07b] <==
	* W1218 11:49:13.938944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1218 11:49:13.939033       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1218 11:49:14.010047       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1218 11:49:14.010100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1218 11:49:14.102943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1218 11:49:14.102966       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1218 11:49:14.155521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1218 11:49:14.155645       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1218 11:49:14.232934       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1218 11:49:14.232963       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1218 11:49:14.270424       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1218 11:49:14.270773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1218 11:49:14.335962       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1218 11:49:14.336231       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1218 11:49:14.356302       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1218 11:49:14.356353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1218 11:49:14.439154       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1218 11:49:14.439626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1218 11:49:14.452855       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1218 11:49:14.453140       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1218 11:49:17.083962       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1218 11:52:16.033269       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1218 11:52:16.033377       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1218 11:52:16.033788       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1218 11:52:16.034102       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [b53866e4bc68] <==
	* I1218 11:53:23.436781       1 serving.go:348] Generated self-signed cert in-memory
	W1218 11:53:25.831707       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1218 11:53:25.832162       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1218 11:53:25.832349       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1218 11:53:25.832425       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1218 11:53:25.868491       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1218 11:53:25.868618       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 11:53:25.871792       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1218 11:53:25.872151       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1218 11:53:25.872592       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1218 11:53:25.874246       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1218 11:53:25.973297       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-12-18 11:52:55 UTC, ends at Mon 2023-12-18 11:54:12 UTC. --
	Dec 18 11:53:27 multinode-107476 kubelet[1290]: I1218 11:53:27.752680    1290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a2ed628797950ad7707e306cd416c5cdf0c70ee962778398b6854bb1b19453c"
	Dec 18 11:53:28 multinode-107476 kubelet[1290]: E1218 11:53:28.073349    1290 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 18 11:53:28 multinode-107476 kubelet[1290]: E1218 11:53:28.073416    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17cd3c37-30e8-4d98-81f5-44f58135adf3-config-volume podName:17cd3c37-30e8-4d98-81f5-44f58135adf3 nodeName:}" failed. No retries permitted until 2023-12-18 11:53:30.073401498 +0000 UTC m=+9.870107509 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/17cd3c37-30e8-4d98-81f5-44f58135adf3-config-volume") pod "coredns-5dd5756b68-nl8xc" (UID: "17cd3c37-30e8-4d98-81f5-44f58135adf3") : object "kube-system"/"coredns" not registered
	Dec 18 11:53:28 multinode-107476 kubelet[1290]: E1218 11:53:28.173701    1290 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:28 multinode-107476 kubelet[1290]: E1218 11:53:28.173732    1290 projected.go:198] Error preparing data for projected volume kube-api-access-ptpr6 for pod default/busybox-5bc68d56bd-sjq8b: object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:28 multinode-107476 kubelet[1290]: E1218 11:53:28.173779    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cb993f3-a977-45b8-a535-f0056d2d7e8b-kube-api-access-ptpr6 podName:6cb993f3-a977-45b8-a535-f0056d2d7e8b nodeName:}" failed. No retries permitted until 2023-12-18 11:53:30.173765772 +0000 UTC m=+9.970471783 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptpr6" (UniqueName: "kubernetes.io/projected/6cb993f3-a977-45b8-a535-f0056d2d7e8b-kube-api-access-ptpr6") pod "busybox-5bc68d56bd-sjq8b" (UID: "6cb993f3-a977-45b8-a535-f0056d2d7e8b") : object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: E1218 11:53:30.091805    1290 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: E1218 11:53:30.092668    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17cd3c37-30e8-4d98-81f5-44f58135adf3-config-volume podName:17cd3c37-30e8-4d98-81f5-44f58135adf3 nodeName:}" failed. No retries permitted until 2023-12-18 11:53:34.092646533 +0000 UTC m=+13.889352536 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/17cd3c37-30e8-4d98-81f5-44f58135adf3-config-volume") pod "coredns-5dd5756b68-nl8xc" (UID: "17cd3c37-30e8-4d98-81f5-44f58135adf3") : object "kube-system"/"coredns" not registered
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: E1218 11:53:30.192257    1290 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: E1218 11:53:30.192316    1290 projected.go:198] Error preparing data for projected volume kube-api-access-ptpr6 for pod default/busybox-5bc68d56bd-sjq8b: object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: E1218 11:53:30.192364    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cb993f3-a977-45b8-a535-f0056d2d7e8b-kube-api-access-ptpr6 podName:6cb993f3-a977-45b8-a535-f0056d2d7e8b nodeName:}" failed. No retries permitted until 2023-12-18 11:53:34.192351041 +0000 UTC m=+13.989057052 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptpr6" (UniqueName: "kubernetes.io/projected/6cb993f3-a977-45b8-a535-f0056d2d7e8b-kube-api-access-ptpr6") pod "busybox-5bc68d56bd-sjq8b" (UID: "6cb993f3-a977-45b8-a535-f0056d2d7e8b") : object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: E1218 11:53:30.908177    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-nl8xc" podUID="17cd3c37-30e8-4d98-81f5-44f58135adf3"
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: I1218 11:53:30.908582    1290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a3f2a24cd178f5c3f5a7b488f9fc08e20ab1568158a073df513cb48f1ad5398"
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: E1218 11:53:30.910521    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-sjq8b" podUID="6cb993f3-a977-45b8-a535-f0056d2d7e8b"
	Dec 18 11:53:32 multinode-107476 kubelet[1290]: E1218 11:53:32.556053    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-nl8xc" podUID="17cd3c37-30e8-4d98-81f5-44f58135adf3"
	Dec 18 11:53:32 multinode-107476 kubelet[1290]: E1218 11:53:32.556252    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-sjq8b" podUID="6cb993f3-a977-45b8-a535-f0056d2d7e8b"
	Dec 18 11:53:33 multinode-107476 kubelet[1290]: I1218 11:53:33.401542    1290 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 18 11:53:34 multinode-107476 kubelet[1290]: E1218 11:53:34.127648    1290 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 18 11:53:34 multinode-107476 kubelet[1290]: E1218 11:53:34.128318    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17cd3c37-30e8-4d98-81f5-44f58135adf3-config-volume podName:17cd3c37-30e8-4d98-81f5-44f58135adf3 nodeName:}" failed. No retries permitted until 2023-12-18 11:53:42.1282954 +0000 UTC m=+21.925001404 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/17cd3c37-30e8-4d98-81f5-44f58135adf3-config-volume") pod "coredns-5dd5756b68-nl8xc" (UID: "17cd3c37-30e8-4d98-81f5-44f58135adf3") : object "kube-system"/"coredns" not registered
	Dec 18 11:53:34 multinode-107476 kubelet[1290]: E1218 11:53:34.228689    1290 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:34 multinode-107476 kubelet[1290]: E1218 11:53:34.228758    1290 projected.go:198] Error preparing data for projected volume kube-api-access-ptpr6 for pod default/busybox-5bc68d56bd-sjq8b: object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:34 multinode-107476 kubelet[1290]: E1218 11:53:34.228810    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cb993f3-a977-45b8-a535-f0056d2d7e8b-kube-api-access-ptpr6 podName:6cb993f3-a977-45b8-a535-f0056d2d7e8b nodeName:}" failed. No retries permitted until 2023-12-18 11:53:42.228796297 +0000 UTC m=+22.025502309 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptpr6" (UniqueName: "kubernetes.io/projected/6cb993f3-a977-45b8-a535-f0056d2d7e8b-kube-api-access-ptpr6") pod "busybox-5bc68d56bd-sjq8b" (UID: "6cb993f3-a977-45b8-a535-f0056d2d7e8b") : object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:59 multinode-107476 kubelet[1290]: I1218 11:53:59.402656    1290 scope.go:117] "RemoveContainer" containerID="de7401b83d12863f008a4b978b770f3f7b4062c46372c4e00e2467eb6e5f0ba2"
	Dec 18 11:53:59 multinode-107476 kubelet[1290]: I1218 11:53:59.405218    1290 scope.go:117] "RemoveContainer" containerID="123ceedfce1ccd5f27ac8b7368fca1d6cacecf05d48983a4f7aa454d139d8b08"
	Dec 18 11:53:59 multinode-107476 kubelet[1290]: E1218 11:53:59.407333    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e04ec19d-39a8-4849-b604-8e46b7f9cea3)\"" pod="kube-system/storage-provisioner" podUID="e04ec19d-39a8-4849-b604-8e46b7f9cea3"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-107476 -n multinode-107476
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-107476 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (117.17s)

TestMultiNode/serial/DeleteNode (3.26s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 node delete m03
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 status --alsologtostderr
multinode_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-107476 status --alsologtostderr: exit status 2 (445.824758ms)

-- stdout --
	multinode-107476
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-107476-m02
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1218 11:54:13.526345  706955 out.go:296] Setting OutFile to fd 1 ...
	I1218 11:54:13.526625  706955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:54:13.526635  706955 out.go:309] Setting ErrFile to fd 2...
	I1218 11:54:13.526640  706955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:54:13.526823  706955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
	I1218 11:54:13.526979  706955 out.go:303] Setting JSON to false
	I1218 11:54:13.527020  706955 mustload.go:65] Loading cluster: multinode-107476
	I1218 11:54:13.527064  706955 notify.go:220] Checking for updates...
	I1218 11:54:13.527394  706955 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:54:13.527409  706955 status.go:255] checking status of multinode-107476 ...
	I1218 11:54:13.527860  706955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:54:13.527922  706955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:54:13.548173  706955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45357
	I1218 11:54:13.548601  706955 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:54:13.549101  706955 main.go:141] libmachine: Using API Version  1
	I1218 11:54:13.549124  706955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:54:13.549486  706955 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:54:13.549685  706955 main.go:141] libmachine: (multinode-107476) Calling .GetState
	I1218 11:54:13.551283  706955 status.go:330] multinode-107476 host status = "Running" (err=<nil>)
	I1218 11:54:13.551300  706955 host.go:66] Checking if "multinode-107476" exists ...
	I1218 11:54:13.551564  706955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:54:13.551596  706955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:54:13.568042  706955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41353
	I1218 11:54:13.568445  706955 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:54:13.569034  706955 main.go:141] libmachine: Using API Version  1
	I1218 11:54:13.569065  706955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:54:13.569386  706955 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:54:13.569569  706955 main.go:141] libmachine: (multinode-107476) Calling .GetIP
	I1218 11:54:13.572327  706955 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:54:13.572759  706955 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:54:13.572794  706955 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:54:13.572918  706955 host.go:66] Checking if "multinode-107476" exists ...
	I1218 11:54:13.573342  706955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:54:13.573394  706955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:54:13.588133  706955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34689
	I1218 11:54:13.588530  706955 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:54:13.589034  706955 main.go:141] libmachine: Using API Version  1
	I1218 11:54:13.589051  706955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:54:13.589382  706955 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:54:13.589564  706955 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:54:13.589767  706955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 11:54:13.589791  706955 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:54:13.592688  706955 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:54:13.593144  706955 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:54:13.593178  706955 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:54:13.593326  706955 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:54:13.593511  706955 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:54:13.593665  706955 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:54:13.593844  706955 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa Username:docker}
	I1218 11:54:13.691182  706955 ssh_runner.go:195] Run: systemctl --version
	I1218 11:54:13.697134  706955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 11:54:13.713687  706955 kubeconfig.go:92] found "multinode-107476" server: "https://192.168.39.124:8443"
	I1218 11:54:13.713722  706955 api_server.go:166] Checking apiserver status ...
	I1218 11:54:13.713766  706955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:54:13.727347  706955 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1729/cgroup
	I1218 11:54:13.736561  706955 api_server.go:182] apiserver freezer: "6:freezer:/kubepods/burstable/podd249aa06177557dc7c27cc4c9fd3f8c4/08bca6e395b93539438581e2888214ec7db42cd8b1043d65051f99d8a0496802"
	I1218 11:54:13.736628  706955 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd249aa06177557dc7c27cc4c9fd3f8c4/08bca6e395b93539438581e2888214ec7db42cd8b1043d65051f99d8a0496802/freezer.state
	I1218 11:54:13.747018  706955 api_server.go:204] freezer state: "THAWED"
	I1218 11:54:13.747043  706955 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:54:13.753023  706955 api_server.go:279] https://192.168.39.124:8443/healthz returned 200:
	ok
	I1218 11:54:13.753050  706955 status.go:421] multinode-107476 apiserver status = Running (err=<nil>)
	I1218 11:54:13.753066  706955 status.go:257] multinode-107476 status: &{Name:multinode-107476 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 11:54:13.753088  706955 status.go:255] checking status of multinode-107476-m02 ...
	I1218 11:54:13.753391  706955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:54:13.753440  706955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:54:13.768602  706955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37763
	I1218 11:54:13.769092  706955 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:54:13.769595  706955 main.go:141] libmachine: Using API Version  1
	I1218 11:54:13.769619  706955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:54:13.769938  706955 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:54:13.770156  706955 main.go:141] libmachine: (multinode-107476-m02) Calling .GetState
	I1218 11:54:13.772019  706955 status.go:330] multinode-107476-m02 host status = "Running" (err=<nil>)
	I1218 11:54:13.772035  706955 host.go:66] Checking if "multinode-107476-m02" exists ...
	I1218 11:54:13.772338  706955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:54:13.772378  706955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:54:13.787277  706955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46585
	I1218 11:54:13.787703  706955 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:54:13.788191  706955 main.go:141] libmachine: Using API Version  1
	I1218 11:54:13.788220  706955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:54:13.788558  706955 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:54:13.788775  706955 main.go:141] libmachine: (multinode-107476-m02) Calling .GetIP
	I1218 11:54:13.791601  706955 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:13.792006  706955 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:53:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:13.792028  706955 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:13.792175  706955 host.go:66] Checking if "multinode-107476-m02" exists ...
	I1218 11:54:13.792483  706955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:54:13.792521  706955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:54:13.807246  706955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I1218 11:54:13.807589  706955 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:54:13.808042  706955 main.go:141] libmachine: Using API Version  1
	I1218 11:54:13.808067  706955 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:54:13.808391  706955 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:54:13.808551  706955 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:13.808711  706955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 11:54:13.808780  706955 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:13.811320  706955 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:13.811660  706955 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:53:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:13.811705  706955 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:13.811827  706955 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:13.812002  706955 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:13.812140  706955 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:13.812284  706955 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa Username:docker}
	I1218 11:54:13.899296  706955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 11:54:13.911382  706955 status.go:257] multinode-107476-m02 status: &{Name:multinode-107476-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:430: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-107476 status --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-107476 -n multinode-107476
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-107476 logs -n 25: (1.265237499s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-107476 cp multinode-107476-m02:/home/docker/cp-test.txt                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2135286047/001/cp-test_multinode-107476-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n                                                                 | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-107476 cp multinode-107476-m02:/home/docker/cp-test.txt                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476:/home/docker/cp-test_multinode-107476-m02_multinode-107476.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n                                                                 | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n multinode-107476 sudo cat                                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | /home/docker/cp-test_multinode-107476-m02_multinode-107476.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-107476 cp multinode-107476-m02:/home/docker/cp-test.txt                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m03:/home/docker/cp-test_multinode-107476-m02_multinode-107476-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n                                                                 | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n multinode-107476-m03 sudo cat                                   | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | /home/docker/cp-test_multinode-107476-m02_multinode-107476-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-107476 cp testdata/cp-test.txt                                                | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n                                                                 | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-107476 cp multinode-107476-m03:/home/docker/cp-test.txt                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2135286047/001/cp-test_multinode-107476-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n                                                                 | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-107476 cp multinode-107476-m03:/home/docker/cp-test.txt                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476:/home/docker/cp-test_multinode-107476-m03_multinode-107476.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n                                                                 | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n multinode-107476 sudo cat                                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | /home/docker/cp-test_multinode-107476-m03_multinode-107476.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-107476 cp multinode-107476-m03:/home/docker/cp-test.txt                       | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m02:/home/docker/cp-test_multinode-107476-m03_multinode-107476-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n                                                                 | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | multinode-107476-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-107476 ssh -n multinode-107476-m02 sudo cat                                   | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	|         | /home/docker/cp-test_multinode-107476-m03_multinode-107476-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-107476 node stop m03                                                          | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:51 UTC |
	| node    | multinode-107476 node start                                                             | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:51 UTC | 18 Dec 23 11:52 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-107476                                                                | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:52 UTC |                     |
	| stop    | -p multinode-107476                                                                     | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:52 UTC | 18 Dec 23 11:52 UTC |
	| start   | -p multinode-107476                                                                     | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:52 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-107476                                                                | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:54 UTC |                     |
	| node    | multinode-107476 node delete                                                            | multinode-107476 | jenkins | v1.32.0 | 18 Dec 23 11:54 UTC | 18 Dec 23 11:54 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 11:52:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 11:52:43.588877  706399 out.go:296] Setting OutFile to fd 1 ...
	I1218 11:52:43.589039  706399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:52:43.589053  706399 out.go:309] Setting ErrFile to fd 2...
	I1218 11:52:43.589061  706399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:52:43.589245  706399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
	I1218 11:52:43.589801  706399 out.go:303] Setting JSON to false
	I1218 11:52:43.590759  706399 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12910,"bootTime":1702887454,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1218 11:52:43.590822  706399 start.go:138] virtualization: kvm guest
	I1218 11:52:43.593457  706399 out.go:177] * [multinode-107476] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1218 11:52:43.595324  706399 notify.go:220] Checking for updates...
	I1218 11:52:43.595332  706399 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 11:52:43.597000  706399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 11:52:43.598742  706399 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:52:43.600311  706399 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17824-683489/.minikube
	I1218 11:52:43.601844  706399 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1218 11:52:43.603279  706399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 11:52:43.605238  706399 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:52:43.605343  706399 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 11:52:43.605808  706399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:52:43.605854  706399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:52:43.620145  706399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34787
	I1218 11:52:43.620579  706399 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:52:43.621112  706399 main.go:141] libmachine: Using API Version  1
	I1218 11:52:43.621138  706399 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:52:43.621497  706399 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:52:43.621692  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:52:43.657009  706399 out.go:177] * Using the kvm2 driver based on existing profile
	I1218 11:52:43.658657  706399 start.go:298] selected driver: kvm2
	I1218 11:52:43.658673  706399 start.go:902] validating driver "kvm2" against &{Name:multinode-107476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-107476 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.238 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.39 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:52:43.658875  706399 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 11:52:43.659246  706399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:52:43.659332  706399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17824-683489/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1218 11:52:43.674156  706399 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1218 11:52:43.674836  706399 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1218 11:52:43.674935  706399 cni.go:84] Creating CNI manager for ""
	I1218 11:52:43.674959  706399 cni.go:136] 3 nodes found, recommending kindnet
	I1218 11:52:43.674972  706399 start_flags.go:323] config:
	{Name:multinode-107476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-107476 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.238 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.39 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:52:43.675263  706399 iso.go:125] acquiring lock: {Name:mk77379b84c746649cc72ce2f2c3817c5150de49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:52:43.677310  706399 out.go:177] * Starting control plane node multinode-107476 in cluster multinode-107476
	I1218 11:52:43.678882  706399 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 11:52:43.678926  706399 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1218 11:52:43.678945  706399 cache.go:56] Caching tarball of preloaded images
	I1218 11:52:43.679040  706399 preload.go:174] Found /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 11:52:43.679053  706399 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 11:52:43.679182  706399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/config.json ...
	I1218 11:52:43.679387  706399 start.go:365] acquiring machines lock for multinode-107476: {Name:mkb0cc9fb73bf09f8db2889f035117cd52674d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 11:52:43.679439  706399 start.go:369] acquired machines lock for "multinode-107476" in 30.186µs
	I1218 11:52:43.679462  706399 start.go:96] Skipping create...Using existing machine configuration
	I1218 11:52:43.679473  706399 fix.go:54] fixHost starting: 
	I1218 11:52:43.679818  706399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:52:43.679872  706399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:52:43.693824  706399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35321
	I1218 11:52:43.694215  706399 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:52:43.694677  706399 main.go:141] libmachine: Using API Version  1
	I1218 11:52:43.694699  706399 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:52:43.695098  706399 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:52:43.695284  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:52:43.695482  706399 main.go:141] libmachine: (multinode-107476) Calling .GetState
	I1218 11:52:43.697182  706399 fix.go:102] recreateIfNeeded on multinode-107476: state=Stopped err=<nil>
	I1218 11:52:43.697205  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	W1218 11:52:43.697378  706399 fix.go:128] unexpected machine state, will restart: <nil>
	I1218 11:52:43.699486  706399 out.go:177] * Restarting existing kvm2 VM for "multinode-107476" ...
	I1218 11:52:43.701188  706399 main.go:141] libmachine: (multinode-107476) Calling .Start
	I1218 11:52:43.701381  706399 main.go:141] libmachine: (multinode-107476) Ensuring networks are active...
	I1218 11:52:43.702137  706399 main.go:141] libmachine: (multinode-107476) Ensuring network default is active
	I1218 11:52:43.702575  706399 main.go:141] libmachine: (multinode-107476) Ensuring network mk-multinode-107476 is active
	I1218 11:52:43.702882  706399 main.go:141] libmachine: (multinode-107476) Getting domain xml...
	I1218 11:52:43.703479  706399 main.go:141] libmachine: (multinode-107476) Creating domain...
	I1218 11:52:44.937955  706399 main.go:141] libmachine: (multinode-107476) Waiting to get IP...
	I1218 11:52:44.939039  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:44.939474  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:44.939585  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:44.939441  706428 retry.go:31] will retry after 295.497233ms: waiting for machine to come up
	I1218 11:52:45.237103  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:45.237598  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:45.237650  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:45.237528  706428 retry.go:31] will retry after 241.852686ms: waiting for machine to come up
	I1218 11:52:45.481091  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:45.481474  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:45.481504  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:45.481425  706428 retry.go:31] will retry after 405.008398ms: waiting for machine to come up
	I1218 11:52:45.887993  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:45.888530  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:45.888561  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:45.888436  706428 retry.go:31] will retry after 596.878679ms: waiting for machine to come up
	I1218 11:52:46.487207  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:46.487686  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:46.487723  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:46.487646  706428 retry.go:31] will retry after 479.661609ms: waiting for machine to come up
	I1218 11:52:46.969331  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:46.969779  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:46.969813  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:46.969718  706428 retry.go:31] will retry after 695.785621ms: waiting for machine to come up
	I1218 11:52:47.666484  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:47.666895  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:47.666928  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:47.666826  706428 retry.go:31] will retry after 798.848059ms: waiting for machine to come up
	I1218 11:52:48.466719  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:48.467146  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:48.467178  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:48.467086  706428 retry.go:31] will retry after 1.485767878s: waiting for machine to come up
	I1218 11:52:49.954305  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:49.954699  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:49.954749  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:49.954654  706428 retry.go:31] will retry after 1.819619299s: waiting for machine to come up
	I1218 11:52:51.776607  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:51.776992  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:51.777016  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:51.776952  706428 retry.go:31] will retry after 2.317000445s: waiting for machine to come up
	I1218 11:52:54.096025  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:54.096436  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:54.096462  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:54.096372  706428 retry.go:31] will retry after 2.107748825s: waiting for machine to come up
	I1218 11:52:56.206568  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:56.206940  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:56.206971  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:56.206886  706428 retry.go:31] will retry after 2.701224561s: waiting for machine to come up
	I1218 11:52:58.909780  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:52:58.910163  706399 main.go:141] libmachine: (multinode-107476) DBG | unable to find current IP address of domain multinode-107476 in network mk-multinode-107476
	I1218 11:52:58.910194  706399 main.go:141] libmachine: (multinode-107476) DBG | I1218 11:52:58.910118  706428 retry.go:31] will retry after 4.332174915s: waiting for machine to come up
	I1218 11:53:03.247678  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.248150  706399 main.go:141] libmachine: (multinode-107476) Found IP for machine: 192.168.39.124
	I1218 11:53:03.248181  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has current primary IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.248192  706399 main.go:141] libmachine: (multinode-107476) Reserving static IP address...
	I1218 11:53:03.248681  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "multinode-107476", mac: "52:54:00:4e:59:cb", ip: "192.168.39.124"} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.248710  706399 main.go:141] libmachine: (multinode-107476) DBG | skip adding static IP to network mk-multinode-107476 - found existing host DHCP lease matching {name: "multinode-107476", mac: "52:54:00:4e:59:cb", ip: "192.168.39.124"}
	I1218 11:53:03.248725  706399 main.go:141] libmachine: (multinode-107476) Reserved static IP address: 192.168.39.124
	I1218 11:53:03.248735  706399 main.go:141] libmachine: (multinode-107476) DBG | Getting to WaitForSSH function...
	I1218 11:53:03.248752  706399 main.go:141] libmachine: (multinode-107476) Waiting for SSH to be available...
	I1218 11:53:03.250850  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.251272  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.251305  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.251380  706399 main.go:141] libmachine: (multinode-107476) DBG | Using SSH client type: external
	I1218 11:53:03.251431  706399 main.go:141] libmachine: (multinode-107476) DBG | Using SSH private key: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa (-rw-------)
	I1218 11:53:03.251495  706399 main.go:141] libmachine: (multinode-107476) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.124 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1218 11:53:03.251518  706399 main.go:141] libmachine: (multinode-107476) DBG | About to run SSH command:
	I1218 11:53:03.251537  706399 main.go:141] libmachine: (multinode-107476) DBG | exit 0
	I1218 11:53:03.347693  706399 main.go:141] libmachine: (multinode-107476) DBG | SSH cmd err, output: <nil>: 
	I1218 11:53:03.348069  706399 main.go:141] libmachine: (multinode-107476) Calling .GetConfigRaw
	I1218 11:53:03.348923  706399 main.go:141] libmachine: (multinode-107476) Calling .GetIP
	I1218 11:53:03.351464  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.351874  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.351906  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.352189  706399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/config.json ...
	I1218 11:53:03.352408  706399 machine.go:88] provisioning docker machine ...
	I1218 11:53:03.352426  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:03.352628  706399 main.go:141] libmachine: (multinode-107476) Calling .GetMachineName
	I1218 11:53:03.352841  706399 buildroot.go:166] provisioning hostname "multinode-107476"
	I1218 11:53:03.352861  706399 main.go:141] libmachine: (multinode-107476) Calling .GetMachineName
	I1218 11:53:03.353044  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:03.355260  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.355633  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.355665  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.355775  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:03.355965  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.356114  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.356209  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:03.356327  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:03.356684  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:03.356702  706399 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-107476 && echo "multinode-107476" | sudo tee /etc/hostname
	I1218 11:53:03.495478  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-107476
	
	I1218 11:53:03.495519  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:03.498288  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.498747  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.498802  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.499026  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:03.499258  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.499423  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.499560  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:03.499796  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:03.500102  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:03.500118  706399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-107476' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-107476/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-107476' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 11:53:03.636275  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 11:53:03.636312  706399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17824-683489/.minikube CaCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17824-683489/.minikube}
	I1218 11:53:03.636332  706399 buildroot.go:174] setting up certificates
	I1218 11:53:03.636351  706399 provision.go:83] configureAuth start
	I1218 11:53:03.636370  706399 main.go:141] libmachine: (multinode-107476) Calling .GetMachineName
	I1218 11:53:03.636693  706399 main.go:141] libmachine: (multinode-107476) Calling .GetIP
	I1218 11:53:03.639303  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.639759  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.639801  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.639935  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:03.641968  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.642455  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.642483  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.642629  706399 provision.go:138] copyHostCerts
	I1218 11:53:03.642664  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem
	I1218 11:53:03.642722  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem, removing ...
	I1218 11:53:03.642737  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem
	I1218 11:53:03.642819  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem (1082 bytes)
	I1218 11:53:03.642933  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem
	I1218 11:53:03.642958  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem, removing ...
	I1218 11:53:03.642970  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem
	I1218 11:53:03.643012  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem (1123 bytes)
	I1218 11:53:03.643087  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem
	I1218 11:53:03.643118  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem, removing ...
	I1218 11:53:03.643123  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem
	I1218 11:53:03.643155  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem (1679 bytes)
	I1218 11:53:03.643235  706399 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca-key.pem org=jenkins.multinode-107476 san=[192.168.39.124 192.168.39.124 localhost 127.0.0.1 minikube multinode-107476]
	I1218 11:53:03.728895  706399 provision.go:172] copyRemoteCerts
	I1218 11:53:03.728965  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 11:53:03.728993  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:03.732532  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.733011  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.733057  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.733166  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:03.733459  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.733658  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:03.733825  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa Username:docker}
	I1218 11:53:03.829438  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 11:53:03.829540  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 11:53:03.851440  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 11:53:03.851526  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1218 11:53:03.872997  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 11:53:03.873064  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1218 11:53:03.894126  706399 provision.go:86] duration metric: configureAuth took 257.762653ms
	I1218 11:53:03.894171  706399 buildroot.go:189] setting minikube options for container-runtime
	I1218 11:53:03.894430  706399 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:53:03.894459  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:03.894777  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:03.897379  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.897774  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:03.897800  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:03.897918  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:03.898164  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.898354  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:03.898519  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:03.898720  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:03.899054  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:03.899067  706399 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 11:53:04.029431  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 11:53:04.029454  706399 buildroot.go:70] root file system type: tmpfs
	I1218 11:53:04.029610  706399 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 11:53:04.029643  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:04.032284  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:04.032632  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:04.032657  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:04.032884  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:04.033092  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:04.033244  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:04.033356  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:04.033497  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:04.033807  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:04.033872  706399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 11:53:04.172200  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 11:53:04.172259  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:04.175231  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:04.175567  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:04.175603  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:04.175767  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:04.175973  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:04.176163  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:04.176296  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:04.176471  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:04.176900  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:04.176921  706399 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 11:53:05.124159  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1218 11:53:05.124189  706399 machine.go:91] provisioned docker machine in 1.771768968s
	I1218 11:53:05.124202  706399 start.go:300] post-start starting for "multinode-107476" (driver="kvm2")
	I1218 11:53:05.124213  706399 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 11:53:05.124248  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:05.124618  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 11:53:05.124659  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:05.127177  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.127511  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:05.127543  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.127822  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:05.128019  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:05.128232  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:05.128365  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa Username:docker}
	I1218 11:53:05.221325  706399 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 11:53:05.225431  706399 command_runner.go:130] > NAME=Buildroot
	I1218 11:53:05.225452  706399 command_runner.go:130] > VERSION=2021.02.12-1-g0492d51-dirty
	I1218 11:53:05.225458  706399 command_runner.go:130] > ID=buildroot
	I1218 11:53:05.225465  706399 command_runner.go:130] > VERSION_ID=2021.02.12
	I1218 11:53:05.225470  706399 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1218 11:53:05.225498  706399 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 11:53:05.225513  706399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17824-683489/.minikube/addons for local assets ...
	I1218 11:53:05.225581  706399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17824-683489/.minikube/files for local assets ...
	I1218 11:53:05.225689  706399 filesync.go:149] local asset: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem -> 6907392.pem in /etc/ssl/certs
	I1218 11:53:05.225707  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem -> /etc/ssl/certs/6907392.pem
	I1218 11:53:05.225825  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 11:53:05.234060  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem --> /etc/ssl/certs/6907392.pem (1708 bytes)
	I1218 11:53:05.256308  706399 start.go:303] post-start completed in 132.091269ms
	I1218 11:53:05.256346  706399 fix.go:56] fixHost completed within 21.576872921s
	I1218 11:53:05.256378  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:05.259066  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.259438  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:05.259467  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.259594  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:05.259822  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:05.260000  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:05.260132  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:05.260300  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:53:05.260663  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I1218 11:53:05.260677  706399 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1218 11:53:05.388710  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702900385.336515708
	
	I1218 11:53:05.388739  706399 fix.go:206] guest clock: 1702900385.336515708
	I1218 11:53:05.388748  706399 fix.go:219] Guest: 2023-12-18 11:53:05.336515708 +0000 UTC Remote: 2023-12-18 11:53:05.256351307 +0000 UTC m=+21.719709962 (delta=80.164401ms)
	I1218 11:53:05.388776  706399 fix.go:190] guest clock delta is within tolerance: 80.164401ms
	I1218 11:53:05.388781  706399 start.go:83] releasing machines lock for "multinode-107476", held for 21.709329749s
	I1218 11:53:05.388800  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:05.389070  706399 main.go:141] libmachine: (multinode-107476) Calling .GetIP
	I1218 11:53:05.391842  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.392255  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:05.392297  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.392448  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:05.392945  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:05.393126  706399 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:53:05.393230  706399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 11:53:05.393297  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:05.393344  706399 ssh_runner.go:195] Run: cat /version.json
	I1218 11:53:05.393374  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:53:05.396053  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.396366  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:05.396390  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.396415  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.396575  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:05.396796  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:05.396908  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:05.396935  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:05.396951  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:05.397108  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:53:05.397138  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa Username:docker}
	I1218 11:53:05.397245  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:53:05.397399  706399 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:53:05.397526  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa Username:docker}
	I1218 11:53:05.484417  706399 command_runner.go:130] > {"iso_version": "v1.32.1-1702490427-17765", "kicbase_version": "v0.0.42-1702394725-17761", "minikube_version": "v1.32.0", "commit": "2780c4af854905e5cd4b94dc93de1f9d00b9040d"}
	I1218 11:53:05.484584  706399 ssh_runner.go:195] Run: systemctl --version
	I1218 11:53:05.515488  706399 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 11:53:05.515582  706399 command_runner.go:130] > systemd 247 (247)
	I1218 11:53:05.515612  706399 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1218 11:53:05.515721  706399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 11:53:05.522226  706399 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1218 11:53:05.522290  706399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 11:53:05.522345  706399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 11:53:05.538265  706399 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1218 11:53:05.538337  706399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1218 11:53:05.538357  706399 start.go:475] detecting cgroup driver to use...
	I1218 11:53:05.538518  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 11:53:05.556555  706399 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1218 11:53:05.556669  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 11:53:05.566263  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 11:53:05.575359  706399 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 11:53:05.575428  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 11:53:05.584526  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 11:53:05.593691  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 11:53:05.602941  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 11:53:05.612320  706399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 11:53:05.621674  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 11:53:05.630899  706399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 11:53:05.639775  706399 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 11:53:05.640003  706399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 11:53:05.648244  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:53:05.747265  706399 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 11:53:05.764104  706399 start.go:475] detecting cgroup driver to use...
	I1218 11:53:05.764197  706399 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 11:53:05.781204  706399 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1218 11:53:05.781232  706399 command_runner.go:130] > [Unit]
	I1218 11:53:05.781238  706399 command_runner.go:130] > Description=Docker Application Container Engine
	I1218 11:53:05.781249  706399 command_runner.go:130] > Documentation=https://docs.docker.com
	I1218 11:53:05.781255  706399 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1218 11:53:05.781260  706399 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1218 11:53:05.781269  706399 command_runner.go:130] > StartLimitBurst=3
	I1218 11:53:05.781273  706399 command_runner.go:130] > StartLimitIntervalSec=60
	I1218 11:53:05.781277  706399 command_runner.go:130] > [Service]
	I1218 11:53:05.781283  706399 command_runner.go:130] > Type=notify
	I1218 11:53:05.781287  706399 command_runner.go:130] > Restart=on-failure
	I1218 11:53:05.781294  706399 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1218 11:53:05.781305  706399 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1218 11:53:05.781312  706399 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1218 11:53:05.781321  706399 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1218 11:53:05.781332  706399 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1218 11:53:05.781338  706399 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1218 11:53:05.781348  706399 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1218 11:53:05.781360  706399 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1218 11:53:05.781374  706399 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1218 11:53:05.781380  706399 command_runner.go:130] > ExecStart=
	I1218 11:53:05.781395  706399 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1218 11:53:05.781406  706399 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1218 11:53:05.781420  706399 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1218 11:53:05.781437  706399 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1218 11:53:05.781448  706399 command_runner.go:130] > LimitNOFILE=infinity
	I1218 11:53:05.781457  706399 command_runner.go:130] > LimitNPROC=infinity
	I1218 11:53:05.781466  706399 command_runner.go:130] > LimitCORE=infinity
	I1218 11:53:05.781478  706399 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1218 11:53:05.781489  706399 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1218 11:53:05.781503  706399 command_runner.go:130] > TasksMax=infinity
	I1218 11:53:05.781510  706399 command_runner.go:130] > TimeoutStartSec=0
	I1218 11:53:05.781518  706399 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1218 11:53:05.781524  706399 command_runner.go:130] > Delegate=yes
	I1218 11:53:05.781533  706399 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1218 11:53:05.781540  706399 command_runner.go:130] > KillMode=process
	I1218 11:53:05.781546  706399 command_runner.go:130] > [Install]
	I1218 11:53:05.781565  706399 command_runner.go:130] > WantedBy=multi-user.target
	I1218 11:53:05.781637  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 11:53:05.804433  706399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 11:53:05.824109  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 11:53:05.835893  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 11:53:05.847147  706399 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 11:53:05.877224  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 11:53:05.889672  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 11:53:05.907426  706399 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1218 11:53:05.907507  706399 ssh_runner.go:195] Run: which cri-dockerd
	I1218 11:53:05.910712  706399 command_runner.go:130] > /usr/bin/cri-dockerd
	I1218 11:53:05.911118  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 11:53:05.919164  706399 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 11:53:05.935395  706399 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 11:53:06.037158  706399 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 11:53:06.143405  706399 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 11:53:06.143544  706399 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1218 11:53:06.160341  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:53:06.269342  706399 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 11:53:07.733823  706399 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.464413724s)
	I1218 11:53:07.733899  706399 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 11:53:07.833594  706399 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1218 11:53:07.945199  706399 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 11:53:08.049248  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:53:08.158198  706399 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1218 11:53:08.174701  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:53:08.276820  706399 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1218 11:53:08.358434  706399 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1218 11:53:08.358505  706399 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1218 11:53:08.364441  706399 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1218 11:53:08.364463  706399 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1218 11:53:08.364470  706399 command_runner.go:130] > Device: 16h/22d	Inode: 833         Links: 1
	I1218 11:53:08.364476  706399 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1218 11:53:08.364488  706399 command_runner.go:130] > Access: 2023-12-18 11:53:08.237952217 +0000
	I1218 11:53:08.364496  706399 command_runner.go:130] > Modify: 2023-12-18 11:53:08.237952217 +0000
	I1218 11:53:08.364506  706399 command_runner.go:130] > Change: 2023-12-18 11:53:08.240952217 +0000
	I1218 11:53:08.364516  706399 command_runner.go:130] >  Birth: -
	I1218 11:53:08.364858  706399 start.go:543] Will wait 60s for crictl version
	I1218 11:53:08.364931  706399 ssh_runner.go:195] Run: which crictl
	I1218 11:53:08.368876  706399 command_runner.go:130] > /usr/bin/crictl
	I1218 11:53:08.369038  706399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1218 11:53:08.420803  706399 command_runner.go:130] > Version:  0.1.0
	I1218 11:53:08.420827  706399 command_runner.go:130] > RuntimeName:  docker
	I1218 11:53:08.420831  706399 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1218 11:53:08.420836  706399 command_runner.go:130] > RuntimeApiVersion:  v1
	I1218 11:53:08.420859  706399 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1218 11:53:08.420916  706399 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 11:53:08.449342  706399 command_runner.go:130] > 24.0.7
	I1218 11:53:08.450610  706399 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1218 11:53:08.475832  706399 command_runner.go:130] > 24.0.7
	I1218 11:53:08.478214  706399 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1218 11:53:08.478259  706399 main.go:141] libmachine: (multinode-107476) Calling .GetIP
	I1218 11:53:08.481071  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:08.481405  706399 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:52:56 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:53:08.481434  706399 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:53:08.481669  706399 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1218 11:53:08.485727  706399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 11:53:08.498500  706399 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 11:53:08.498560  706399 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 11:53:08.517432  706399 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1218 11:53:08.517456  706399 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1218 11:53:08.517461  706399 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1218 11:53:08.517467  706399 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1218 11:53:08.517472  706399 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1218 11:53:08.517479  706399 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1218 11:53:08.517488  706399 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1218 11:53:08.517493  706399 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1218 11:53:08.517498  706399 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 11:53:08.517502  706399 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1218 11:53:08.518427  706399 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1218 11:53:08.518444  706399 docker.go:601] Images already preloaded, skipping extraction
	I1218 11:53:08.518497  706399 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1218 11:53:08.540045  706399 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I1218 11:53:08.540071  706399 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I1218 11:53:08.540079  706399 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I1218 11:53:08.540103  706399 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I1218 11:53:08.540112  706399 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1218 11:53:08.540125  706399 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1218 11:53:08.540143  706399 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1218 11:53:08.540151  706399 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1218 11:53:08.540160  706399 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1218 11:53:08.540172  706399 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1218 11:53:08.540915  706399 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1218 11:53:08.540940  706399 cache_images.go:84] Images are preloaded, skipping loading
	I1218 11:53:08.541003  706399 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1218 11:53:08.570799  706399 command_runner.go:130] > cgroupfs
	I1218 11:53:08.570938  706399 cni.go:84] Creating CNI manager for ""
	I1218 11:53:08.570956  706399 cni.go:136] 3 nodes found, recommending kindnet
	I1218 11:53:08.570983  706399 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1218 11:53:08.571015  706399 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.124 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-107476 NodeName:multinode-107476 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1218 11:53:08.571172  706399 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.124
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-107476"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.124
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.124"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1218 11:53:08.571284  706399 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-107476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-107476 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1218 11:53:08.571354  706399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1218 11:53:08.580283  706399 command_runner.go:130] > kubeadm
	I1218 11:53:08.580300  706399 command_runner.go:130] > kubectl
	I1218 11:53:08.580304  706399 command_runner.go:130] > kubelet
	I1218 11:53:08.580321  706399 binaries.go:44] Found k8s binaries, skipping transfer
	I1218 11:53:08.580377  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1218 11:53:08.588532  706399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1218 11:53:08.604728  706399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1218 11:53:08.620425  706399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1218 11:53:08.636780  706399 ssh_runner.go:195] Run: grep 192.168.39.124	control-plane.minikube.internal$ /etc/hosts
	I1218 11:53:08.640548  706399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.124	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1218 11:53:08.652739  706399 certs.go:56] Setting up /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476 for IP: 192.168.39.124
	I1218 11:53:08.652776  706399 certs.go:190] acquiring lock for shared ca certs: {Name:mkd1aed956519f14c4fcaee2b34a279c90e2b05a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:53:08.652956  706399 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.key
	I1218 11:53:08.653001  706399 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17824-683489/.minikube/proxy-client-ca.key
	I1218 11:53:08.653075  706399 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/client.key
	I1218 11:53:08.653122  706399 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/apiserver.key.9675f833
	I1218 11:53:08.653155  706399 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/proxy-client.key
	I1218 11:53:08.653165  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1218 11:53:08.653181  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1218 11:53:08.653193  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1218 11:53:08.653201  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1218 11:53:08.653213  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1218 11:53:08.653222  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1218 11:53:08.653233  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1218 11:53:08.653244  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1218 11:53:08.653292  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/690739.pem (1338 bytes)
	W1218 11:53:08.653316  706399 certs.go:433] ignoring /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/690739_empty.pem, impossibly tiny 0 bytes
	I1218 11:53:08.653332  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca-key.pem (1675 bytes)
	I1218 11:53:08.653359  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem (1082 bytes)
	I1218 11:53:08.653383  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem (1123 bytes)
	I1218 11:53:08.653409  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem (1679 bytes)
	I1218 11:53:08.653448  706399 certs.go:437] found cert: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem (1708 bytes)
	I1218 11:53:08.653474  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem -> /usr/share/ca-certificates/6907392.pem
	I1218 11:53:08.653489  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:53:08.653501  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/690739.pem -> /usr/share/ca-certificates/690739.pem
	I1218 11:53:08.654088  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1218 11:53:08.677424  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1218 11:53:08.700082  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1218 11:53:08.722631  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1218 11:53:08.744711  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1218 11:53:08.766872  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1218 11:53:08.789385  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1218 11:53:08.812077  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1218 11:53:08.834610  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem --> /usr/share/ca-certificates/6907392.pem (1708 bytes)
	I1218 11:53:08.857333  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1218 11:53:08.879344  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/certs/690739.pem --> /usr/share/ca-certificates/690739.pem (1338 bytes)
	I1218 11:53:08.901384  706399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1218 11:53:08.916780  706399 ssh_runner.go:195] Run: openssl version
	I1218 11:53:08.922282  706399 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1218 11:53:08.922341  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6907392.pem && ln -fs /usr/share/ca-certificates/6907392.pem /etc/ssl/certs/6907392.pem"
	I1218 11:53:08.931642  706399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6907392.pem
	I1218 11:53:08.935749  706399 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 18 11:35 /usr/share/ca-certificates/6907392.pem
	I1218 11:53:08.935958  706399 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec 18 11:35 /usr/share/ca-certificates/6907392.pem
	I1218 11:53:08.936017  706399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6907392.pem
	I1218 11:53:08.941156  706399 command_runner.go:130] > 3ec20f2e
	I1218 11:53:08.941471  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6907392.pem /etc/ssl/certs/3ec20f2e.0"
	I1218 11:53:08.950462  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1218 11:53:08.959471  706399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:53:08.963656  706399 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:53:08.963960  706399 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec 18 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:53:08.964002  706399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1218 11:53:08.969248  706399 command_runner.go:130] > b5213941
	I1218 11:53:08.969314  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1218 11:53:08.978275  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/690739.pem && ln -fs /usr/share/ca-certificates/690739.pem /etc/ssl/certs/690739.pem"
	I1218 11:53:08.987435  706399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/690739.pem
	I1218 11:53:08.991559  706399 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 18 11:35 /usr/share/ca-certificates/690739.pem
	I1218 11:53:08.991833  706399 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec 18 11:35 /usr/share/ca-certificates/690739.pem
	I1218 11:53:08.991883  706399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/690739.pem
	I1218 11:53:08.997219  706399 command_runner.go:130] > 51391683
	I1218 11:53:08.997300  706399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/690739.pem /etc/ssl/certs/51391683.0"
	I1218 11:53:09.007519  706399 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1218 11:53:09.011748  706399 command_runner.go:130] > ca.crt
	I1218 11:53:09.011764  706399 command_runner.go:130] > ca.key
	I1218 11:53:09.011769  706399 command_runner.go:130] > healthcheck-client.crt
	I1218 11:53:09.011773  706399 command_runner.go:130] > healthcheck-client.key
	I1218 11:53:09.011778  706399 command_runner.go:130] > peer.crt
	I1218 11:53:09.011782  706399 command_runner.go:130] > peer.key
	I1218 11:53:09.011786  706399 command_runner.go:130] > server.crt
	I1218 11:53:09.011793  706399 command_runner.go:130] > server.key
	I1218 11:53:09.011883  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1218 11:53:09.017731  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.017835  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1218 11:53:09.023186  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.023240  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1218 11:53:09.028589  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.028641  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1218 11:53:09.033905  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.033983  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1218 11:53:09.039296  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.039520  706399 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1218 11:53:09.044713  706399 command_runner.go:130] > Certificate will not expire
	I1218 11:53:09.044770  706399 kubeadm.go:404] StartCluster: {Name:multinode-107476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-107476 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.238 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.39 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingr
ess:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:53:09.044901  706399 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1218 11:53:09.063644  706399 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1218 11:53:09.072501  706399 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1218 11:53:09.072518  706399 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1218 11:53:09.072524  706399 command_runner.go:130] > /var/lib/minikube/etcd:
	I1218 11:53:09.072529  706399 command_runner.go:130] > member
	I1218 11:53:09.072549  706399 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1218 11:53:09.072562  706399 kubeadm.go:636] restartCluster start
	I1218 11:53:09.072621  706399 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1218 11:53:09.080707  706399 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:09.081213  706399 kubeconfig.go:135] verify returned: extract IP: "multinode-107476" does not appear in /home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:53:09.081366  706399 kubeconfig.go:146] "multinode-107476" context is missing from /home/jenkins/minikube-integration/17824-683489/kubeconfig - will repair!
	I1218 11:53:09.081646  706399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17824-683489/kubeconfig: {Name:mkbe3b47b918311ed7d778fc321c77660f5f2482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:53:09.082090  706399 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:53:09.082328  706399 kapi.go:59] client config for multinode-107476: &rest.Config{Host:"https://192.168.39.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/client.crt", KeyFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/client.key", CAFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1ed00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 11:53:09.082929  706399 cert_rotation.go:137] Starting client certificate rotation controller
	I1218 11:53:09.083156  706399 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1218 11:53:09.090938  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:09.090982  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:09.101227  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:09.591919  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:09.592030  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:09.603387  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:10.091928  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:10.092030  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:10.103288  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:10.591906  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:10.592032  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:10.602954  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:11.091515  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:11.091641  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:11.103090  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:11.591669  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:11.591804  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:11.603393  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:12.092006  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:12.092105  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:12.103893  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:12.591441  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:12.591518  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:12.602651  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:13.091237  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:13.091369  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:13.103118  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:13.590973  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:13.592383  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:13.603723  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:14.091222  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:14.091346  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:14.102533  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:14.591068  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:14.591166  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:14.602318  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:15.091932  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:15.092046  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:15.103581  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:15.591099  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:15.591204  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:15.602422  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:16.091999  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:16.092095  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:16.103457  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:16.591070  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:16.591174  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:16.602679  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:17.091238  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:17.091370  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:17.103125  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:17.591667  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:17.591745  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:17.602974  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:18.091582  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:18.091718  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:18.103155  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:18.591946  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:18.592225  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:18.603460  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:19.091322  706399 api_server.go:166] Checking apiserver status ...
	I1218 11:53:19.091400  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1218 11:53:19.102630  706399 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1218 11:53:19.102658  706399 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1218 11:53:19.102668  706399 kubeadm.go:1135] stopping kube-system containers ...
	I1218 11:53:19.102726  706399 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1218 11:53:19.126882  706399 command_runner.go:130] > 8a9a67bb77c4
	I1218 11:53:19.126909  706399 command_runner.go:130] > de7401b83d12
	I1218 11:53:19.126915  706399 command_runner.go:130] > fecf0ace453c
	I1218 11:53:19.126921  706399 command_runner.go:130] > a5499078bf2c
	I1218 11:53:19.126928  706399 command_runner.go:130] > f6e3111557b6
	I1218 11:53:19.126934  706399 command_runner.go:130] > 9bd0f65050dc
	I1218 11:53:19.126939  706399 command_runner.go:130] > ecad224e7387
	I1218 11:53:19.126946  706399 command_runner.go:130] > ca78bca379eb
	I1218 11:53:19.126952  706399 command_runner.go:130] > 367a10c5d07b
	I1218 11:53:19.126961  706399 command_runner.go:130] > fcaaf17b1ede
	I1218 11:53:19.126966  706399 command_runner.go:130] > 9226aa8cd1e9
	I1218 11:53:19.126975  706399 command_runner.go:130] > 4b66d146a3f4
	I1218 11:53:19.126982  706399 command_runner.go:130] > d06f419d4917
	I1218 11:53:19.126996  706399 command_runner.go:130] > 49adada57ae1
	I1218 11:53:19.127005  706399 command_runner.go:130] > 51c0e2b56511
	I1218 11:53:19.127012  706399 command_runner.go:130] > 7539f6919992
	I1218 11:53:19.127994  706399 docker.go:469] Stopping containers: [8a9a67bb77c4 de7401b83d12 fecf0ace453c a5499078bf2c f6e3111557b6 9bd0f65050dc ecad224e7387 ca78bca379eb 367a10c5d07b fcaaf17b1ede 9226aa8cd1e9 4b66d146a3f4 d06f419d4917 49adada57ae1 51c0e2b56511 7539f6919992]
	I1218 11:53:19.128071  706399 ssh_runner.go:195] Run: docker stop 8a9a67bb77c4 de7401b83d12 fecf0ace453c a5499078bf2c f6e3111557b6 9bd0f65050dc ecad224e7387 ca78bca379eb 367a10c5d07b fcaaf17b1ede 9226aa8cd1e9 4b66d146a3f4 d06f419d4917 49adada57ae1 51c0e2b56511 7539f6919992
	I1218 11:53:19.146845  706399 command_runner.go:130] > 8a9a67bb77c4
	I1218 11:53:19.146887  706399 command_runner.go:130] > de7401b83d12
	I1218 11:53:19.146894  706399 command_runner.go:130] > fecf0ace453c
	I1218 11:53:19.148422  706399 command_runner.go:130] > a5499078bf2c
	I1218 11:53:19.148444  706399 command_runner.go:130] > f6e3111557b6
	I1218 11:53:19.148709  706399 command_runner.go:130] > 9bd0f65050dc
	I1218 11:53:19.148746  706399 command_runner.go:130] > ecad224e7387
	I1218 11:53:19.150621  706399 command_runner.go:130] > ca78bca379eb
	I1218 11:53:19.150979  706399 command_runner.go:130] > 367a10c5d07b
	I1218 11:53:19.150995  706399 command_runner.go:130] > fcaaf17b1ede
	I1218 11:53:19.151009  706399 command_runner.go:130] > 9226aa8cd1e9
	I1218 11:53:19.151182  706399 command_runner.go:130] > 4b66d146a3f4
	I1218 11:53:19.151421  706399 command_runner.go:130] > d06f419d4917
	I1218 11:53:19.151682  706399 command_runner.go:130] > 49adada57ae1
	I1218 11:53:19.151693  706399 command_runner.go:130] > 51c0e2b56511
	I1218 11:53:19.151697  706399 command_runner.go:130] > 7539f6919992
	I1218 11:53:19.152748  706399 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1218 11:53:19.167208  706399 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1218 11:53:19.175617  706399 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1218 11:53:19.175659  706399 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1218 11:53:19.175670  706399 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1218 11:53:19.175682  706399 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 11:53:19.175764  706399 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1218 11:53:19.175829  706399 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1218 11:53:19.184086  706399 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1218 11:53:19.184108  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:19.290255  706399 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1218 11:53:19.290616  706399 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1218 11:53:19.291271  706399 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1218 11:53:19.291767  706399 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1218 11:53:19.292523  706399 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1218 11:53:19.293290  706399 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1218 11:53:19.294173  706399 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1218 11:53:19.294659  706399 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1218 11:53:19.295268  706399 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1218 11:53:19.295750  706399 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1218 11:53:19.296399  706399 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1218 11:53:19.297138  706399 command_runner.go:130] > [certs] Using the existing "sa" key
	I1218 11:53:19.298557  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:19.350785  706399 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1218 11:53:19.458190  706399 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1218 11:53:19.753510  706399 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1218 11:53:19.917725  706399 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1218 11:53:20.041823  706399 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1218 11:53:20.044334  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:20.111720  706399 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1218 11:53:20.113879  706399 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1218 11:53:20.113900  706399 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1218 11:53:20.233250  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:20.333464  706399 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1218 11:53:20.333508  706399 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1218 11:53:20.333519  706399 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1218 11:53:20.333529  706399 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1218 11:53:20.333603  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:20.388000  706399 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1218 11:53:20.403526  706399 api_server.go:52] waiting for apiserver process to appear ...
	I1218 11:53:20.403632  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:20.904600  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:21.403801  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:21.904580  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:22.403835  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:22.903754  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:22.917660  706399 command_runner.go:130] > 1729
	I1218 11:53:22.922833  706399 api_server.go:72] duration metric: took 2.519305176s to wait for apiserver process to appear ...
	I1218 11:53:22.922860  706399 api_server.go:88] waiting for apiserver healthz status ...
	I1218 11:53:22.922886  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:22.923542  706399 api_server.go:269] stopped: https://192.168.39.124:8443/healthz: Get "https://192.168.39.124:8443/healthz": dial tcp 192.168.39.124:8443: connect: connection refused
	I1218 11:53:23.423182  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:25.843152  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1218 11:53:25.843187  706399 api_server.go:103] status: https://192.168.39.124:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1218 11:53:25.843205  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:25.909873  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1218 11:53:25.909925  706399 api_server.go:103] status: https://192.168.39.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1218 11:53:25.922999  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:25.929359  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1218 11:53:25.929386  706399 api_server.go:103] status: https://192.168.39.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1218 11:53:26.422960  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:26.428892  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1218 11:53:26.428928  706399 api_server.go:103] status: https://192.168.39.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1218 11:53:26.923578  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:26.931290  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1218 11:53:26.931325  706399 api_server.go:103] status: https://192.168.39.124:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1218 11:53:27.423966  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:27.429135  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 200:
	ok
	I1218 11:53:27.429243  706399 round_trippers.go:463] GET https://192.168.39.124:8443/version
	I1218 11:53:27.429252  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:27.429261  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:27.429267  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:27.437137  706399 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1218 11:53:27.437163  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:27.437172  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:27.437179  706399 round_trippers.go:580]     Content-Length: 264
	I1218 11:53:27.437187  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:27 GMT
	I1218 11:53:27.437194  706399 round_trippers.go:580]     Audit-Id: e12ea9f6-c15b-4448-831c-e69c87f78e83
	I1218 11:53:27.437211  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:27.437223  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:27.437234  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:27.437262  706399 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1218 11:53:27.437348  706399 api_server.go:141] control plane version: v1.28.4
	I1218 11:53:27.437371  706399 api_server.go:131] duration metric: took 4.514501797s to wait for apiserver health ...
	I1218 11:53:27.437384  706399 cni.go:84] Creating CNI manager for ""
	I1218 11:53:27.437394  706399 cni.go:136] 3 nodes found, recommending kindnet
	I1218 11:53:27.439521  706399 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1218 11:53:27.441036  706399 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1218 11:53:27.450911  706399 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1218 11:53:27.450934  706399 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I1218 11:53:27.450953  706399 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1218 11:53:27.450964  706399 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1218 11:53:27.450981  706399 command_runner.go:130] > Access: 2023-12-18 11:52:56.552952217 +0000
	I1218 11:53:27.450993  706399 command_runner.go:130] > Modify: 2023-12-13 23:27:31.000000000 +0000
	I1218 11:53:27.451003  706399 command_runner.go:130] > Change: 2023-12-18 11:52:54.793952217 +0000
	I1218 11:53:27.451013  706399 command_runner.go:130] >  Birth: -
	I1218 11:53:27.458216  706399 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1218 11:53:27.458236  706399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1218 11:53:27.509185  706399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1218 11:53:28.905245  706399 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1218 11:53:28.912521  706399 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1218 11:53:28.916523  706399 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1218 11:53:28.934945  706399 command_runner.go:130] > daemonset.apps/kindnet configured
	I1218 11:53:28.940934  706399 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.431702924s)
	I1218 11:53:28.940965  706399 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 11:53:28.941087  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:28.941101  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:28.941113  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:28.941123  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:28.945051  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:28.945076  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:28.945086  706399 round_trippers.go:580]     Audit-Id: 6c622874-25a6-4b96-9b2e-4f49b904ff51
	I1218 11:53:28.945094  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:28.945102  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:28.945110  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:28.945118  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:28.945126  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:28 GMT
	I1218 11:53:28.946529  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"777"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84584 chars]
	I1218 11:53:28.950707  706399 system_pods.go:59] 12 kube-system pods found
	I1218 11:53:28.950736  706399 system_pods.go:61] "coredns-5dd5756b68-nl8xc" [17cd3c37-30e8-4d98-81f5-44f58135adf3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1218 11:53:28.950745  706399 system_pods.go:61] "etcd-multinode-107476" [57bcfe21-f4da-4bcf-bb4e-385b695e1e0f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1218 11:53:28.950751  706399 system_pods.go:61] "kindnet-6wlkb" [1cf338b4-8a33-4e69-aa83-3cd29b041e08] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1218 11:53:28.950756  706399 system_pods.go:61] "kindnet-8hrhv" [ef739466-48d4-4fbd-8fa5-63a41e4c6833] Running
	I1218 11:53:28.950760  706399 system_pods.go:61] "kindnet-l9h8d" [0acf0fd4-5988-4545-828c-7cb6076a5b18] Running
	I1218 11:53:28.950766  706399 system_pods.go:61] "kube-apiserver-multinode-107476" [ed1a5fb5-539a-4a7d-9977-42e1392858fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1218 11:53:28.950775  706399 system_pods.go:61] "kube-controller-manager-multinode-107476" [9b1fc3f6-07ef-4577-9135-a1c4844e5555] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1218 11:53:28.950782  706399 system_pods.go:61] "kube-proxy-9xwh7" [d1b02596-ab29-4f7a-8118-bd091eef9e44] Running
	I1218 11:53:28.950792  706399 system_pods.go:61] "kube-proxy-ff4bs" [a5e9af15-7c15-4de8-8be0-1b8e7289125f] Running
	I1218 11:53:28.950800  706399 system_pods.go:61] "kube-proxy-jf8kx" [060b1020-573b-4b35-9a0b-e04f37535267] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1218 11:53:28.950809  706399 system_pods.go:61] "kube-scheduler-multinode-107476" [08f65d94-d942-4ae5-a937-e3efff4b51dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1218 11:53:28.950824  706399 system_pods.go:61] "storage-provisioner" [e04ec19d-39a8-4849-b604-8e46b7f9cea3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1218 11:53:28.950832  706399 system_pods.go:74] duration metric: took 9.862056ms to wait for pod list to return data ...
	I1218 11:53:28.950839  706399 node_conditions.go:102] verifying NodePressure condition ...
	I1218 11:53:28.950909  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes
	I1218 11:53:28.950918  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:28.950925  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:28.950931  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:28.953444  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:28.953475  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:28.953487  706399 round_trippers.go:580]     Audit-Id: 0d66de6b-1b8d-4012-9156-1fa20bb81935
	I1218 11:53:28.953495  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:28.953501  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:28.953508  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:28.953513  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:28.953519  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:28 GMT
	I1218 11:53:28.953797  706399 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"777"},"items":[{"metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14775 chars]
	I1218 11:53:28.954628  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:28.954655  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:28.954667  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:28.954671  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:28.954677  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:28.954684  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:28.954690  706399 node_conditions.go:105] duration metric: took 3.843221ms to run NodePressure ...
	I1218 11:53:28.954714  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1218 11:53:29.198463  706399 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1218 11:53:29.198489  706399 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1218 11:53:29.198613  706399 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1218 11:53:29.198764  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1218 11:53:29.198778  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.198790  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.198807  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.202177  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:29.202201  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.202208  706399 round_trippers.go:580]     Audit-Id: 19d0d8d5-e9c5-4d32-b655-9ad8a4c44da9
	I1218 11:53:29.202213  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.202218  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.202223  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.202228  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.202233  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.203368  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"779"},"items":[{"metadata":{"name":"etcd-multinode-107476","namespace":"kube-system","uid":"57bcfe21-f4da-4bcf-bb4e-385b695e1e0f","resourceVersion":"767","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.124:2379","kubernetes.io/config.hash":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.mirror":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.seen":"2023-12-18T11:49:16.607301032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29788 chars]
	I1218 11:53:29.204464  706399 kubeadm.go:787] kubelet initialised
	I1218 11:53:29.204488  706399 kubeadm.go:788] duration metric: took 5.842944ms waiting for restarted kubelet to initialise ...
	I1218 11:53:29.204498  706399 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 11:53:29.204573  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:29.204584  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.204595  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.204613  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.208130  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:29.208151  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.208159  706399 round_trippers.go:580]     Audit-Id: 450b4722-b778-4d0a-aede-ee77ca9c229c
	I1218 11:53:29.208165  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.208171  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.208176  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.208181  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.208208  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.209329  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"779"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84584 chars]
	I1218 11:53:29.211875  706399 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.211970  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:29.211980  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.211991  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.212001  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.214577  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.214596  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.214603  706399 round_trippers.go:580]     Audit-Id: 9385acf6-1b01-4b3d-928c-439fe28d4f97
	I1218 11:53:29.214608  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.214613  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.214618  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.214623  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.214627  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.215229  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:29.215743  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:29.215765  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.215776  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.215783  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.217921  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.217938  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.217944  706399 round_trippers.go:580]     Audit-Id: 8a8697ed-9283-4bdf-9239-28520f9f9b9f
	I1218 11:53:29.217950  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.217958  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.217968  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.217977  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.217988  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.218120  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:29.218457  706399 pod_ready.go:97] node "multinode-107476" hosting pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.218478  706399 pod_ready.go:81] duration metric: took 6.581675ms waiting for pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:29.218492  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.218502  706399 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.218551  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107476
	I1218 11:53:29.218558  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.218572  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.218585  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.220388  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:29.220404  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.220410  706399 round_trippers.go:580]     Audit-Id: 6774ec4b-7426-4031-ac00-5f3c00310f09
	I1218 11:53:29.220415  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.220420  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.220426  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.220433  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.220442  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.220551  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107476","namespace":"kube-system","uid":"57bcfe21-f4da-4bcf-bb4e-385b695e1e0f","resourceVersion":"767","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.124:2379","kubernetes.io/config.hash":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.mirror":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.seen":"2023-12-18T11:49:16.607301032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6305 chars]
	I1218 11:53:29.220938  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:29.220954  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.220961  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.220967  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.222861  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:29.222877  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.222886  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.222897  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.222905  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.222913  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.222925  706399 round_trippers.go:580]     Audit-Id: cdc058a1-0407-4522-ad4e-1bccaa86b8e0
	I1218 11:53:29.222934  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.223090  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:29.223369  706399 pod_ready.go:97] node "multinode-107476" hosting pod "etcd-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.223386  706399 pod_ready.go:81] duration metric: took 4.874816ms waiting for pod "etcd-multinode-107476" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:29.223394  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "etcd-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.223412  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.223472  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-107476
	I1218 11:53:29.223479  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.223486  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.223496  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.225396  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:29.225413  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.225419  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.225425  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.225430  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.225435  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.225442  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.225451  706399 round_trippers.go:580]     Audit-Id: 2464f96a-0515-46f9-8313-633c8eafb3b2
	I1218 11:53:29.225634  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-107476","namespace":"kube-system","uid":"ed1a5fb5-539a-4a7d-9977-42e1392858fb","resourceVersion":"768","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.124:8443","kubernetes.io/config.hash":"d249aa06177557dc7c27cc4c9fd3f8c4","kubernetes.io/config.mirror":"d249aa06177557dc7c27cc4c9fd3f8c4","kubernetes.io/config.seen":"2023-12-18T11:49:16.607305528Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7859 chars]
	I1218 11:53:29.225978  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:29.225994  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.226001  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.226006  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.227849  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:29.227867  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.227876  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.227884  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.227892  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.227900  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.227909  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.227916  706399 round_trippers.go:580]     Audit-Id: 8723f00d-f528-46cc-b34b-878c1dbe29bf
	I1218 11:53:29.228105  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:29.228354  706399 pod_ready.go:97] node "multinode-107476" hosting pod "kube-apiserver-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.228374  706399 pod_ready.go:81] duration metric: took 4.951319ms waiting for pod "kube-apiserver-multinode-107476" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:29.228382  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "kube-apiserver-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.228387  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.228468  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-107476
	I1218 11:53:29.228478  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.228484  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.228490  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.234141  706399 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 11:53:29.234160  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.234169  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.234176  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.234190  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.234195  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.234201  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.234205  706399 round_trippers.go:580]     Audit-Id: e7a7e09c-4d05-4a64-917b-5e55b2c17b60
	I1218 11:53:29.234474  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-107476","namespace":"kube-system","uid":"9b1fc3f6-07ef-4577-9135-a1c4844e5555","resourceVersion":"769","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"00c351f167ca4a8342aa8125cafbf1ad","kubernetes.io/config.mirror":"00c351f167ca4a8342aa8125cafbf1ad","kubernetes.io/config.seen":"2023-12-18T11:49:16.607306981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7440 chars]
	I1218 11:53:29.342153  706399 request.go:629] Waited for 107.293593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:29.342245  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:29.342251  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.342259  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.342265  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.345014  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.345032  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.345039  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.345044  706399 round_trippers.go:580]     Audit-Id: 931285de-8f53-4e79-b792-460f413e4aff
	I1218 11:53:29.345049  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.345054  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.345059  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.345068  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.345238  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:29.345553  706399 pod_ready.go:97] node "multinode-107476" hosting pod "kube-controller-manager-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.345573  706399 pod_ready.go:81] duration metric: took 117.178912ms waiting for pod "kube-controller-manager-multinode-107476" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:29.345582  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "kube-controller-manager-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:29.345593  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9xwh7" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.542039  706399 request.go:629] Waited for 196.361004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xwh7
	I1218 11:53:29.542142  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xwh7
	I1218 11:53:29.542147  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.542156  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.542162  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.544982  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.545002  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.545009  706399 round_trippers.go:580]     Audit-Id: e1d63858-4541-4ccd-a4da-08fd054a97e6
	I1218 11:53:29.545017  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.545025  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.545033  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.545042  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.545058  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.545244  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9xwh7","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1b02596-ab29-4f7a-8118-bd091eef9e44","resourceVersion":"520","creationTimestamp":"2023-12-18T11:50:18Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:50:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I1218 11:53:29.741997  706399 request.go:629] Waited for 196.344122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m02
	I1218 11:53:29.742076  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m02
	I1218 11:53:29.742082  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.742093  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.742117  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.744705  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.744733  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.744743  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.744751  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.744759  706399 round_trippers.go:580]     Audit-Id: eb4d544e-890a-4cf6-8b49-17e1c66fedd1
	I1218 11:53:29.744766  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.744775  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.744785  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.744985  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476-m02","uid":"aac92642-4fcf-4fbe-89f6-b1c274d602fe","resourceVersion":"737","creationTimestamp":"2023-12-18T11:50:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T11_52_06_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:50:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3819 chars]
	I1218 11:53:29.745330  706399 pod_ready.go:92] pod "kube-proxy-9xwh7" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:29.745356  706399 pod_ready.go:81] duration metric: took 399.751355ms waiting for pod "kube-proxy-9xwh7" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.745369  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ff4bs" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:29.941544  706399 request.go:629] Waited for 196.09241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff4bs
	I1218 11:53:29.941631  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff4bs
	I1218 11:53:29.941639  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:29.941653  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:29.941664  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:29.944619  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:29.944641  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:29.944649  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:29.944654  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:29.944659  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:29.944665  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:29 GMT
	I1218 11:53:29.944670  706399 round_trippers.go:580]     Audit-Id: 53295520-6dfc-40b0-aa42-f14c320fd991
	I1218 11:53:29.944675  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:29.945395  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ff4bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"a5e9af15-7c15-4de8-8be0-1b8e7289125f","resourceVersion":"746","creationTimestamp":"2023-12-18T11:51:17Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:51:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I1218 11:53:30.141176  706399 request.go:629] Waited for 195.305381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m03
	I1218 11:53:30.141251  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m03
	I1218 11:53:30.141277  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.141288  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.141294  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.144266  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:30.144293  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.144304  706399 round_trippers.go:580]     Audit-Id: db750d14-63e9-423b-9181-601ba7e56368
	I1218 11:53:30.144313  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.144321  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.144328  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.144335  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.144342  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.144508  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476-m03","uid":"18274b06-f1b8-4878-9e6b-e3745fba73a7","resourceVersion":"759","creationTimestamp":"2023-12-18T11:52:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T11_52_06_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:52:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3635 chars]
	I1218 11:53:30.144910  706399 pod_ready.go:92] pod "kube-proxy-ff4bs" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:30.144938  706399 pod_ready.go:81] duration metric: took 399.556805ms waiting for pod "kube-proxy-ff4bs" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:30.144951  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jf8kx" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:30.341891  706399 request.go:629] Waited for 196.832639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jf8kx
	I1218 11:53:30.341974  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jf8kx
	I1218 11:53:30.341981  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.341989  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.341996  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.344936  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:30.344960  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.344969  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.344976  706399 round_trippers.go:580]     Audit-Id: 6d7e686b-0932-465f-b25e-09aeb30d81ad
	I1218 11:53:30.344983  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.344990  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.344998  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.345005  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.345247  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jf8kx","generateName":"kube-proxy-","namespace":"kube-system","uid":"060b1020-573b-4b35-9a0b-e04f37535267","resourceVersion":"772","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5932 chars]
	I1218 11:53:30.542107  706399 request.go:629] Waited for 196.385627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:30.542172  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:30.542176  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.542202  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.542210  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.545091  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:30.545113  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.545121  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.545130  706399 round_trippers.go:580]     Audit-Id: c21950d0-952e-42f1-995c-f068b90f04c0
	I1218 11:53:30.545138  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.545145  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.545153  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.545164  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.545578  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:30.545899  706399 pod_ready.go:97] node "multinode-107476" hosting pod "kube-proxy-jf8kx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:30.545916  706399 pod_ready.go:81] duration metric: took 400.958711ms waiting for pod "kube-proxy-jf8kx" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:30.545925  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "kube-proxy-jf8kx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:30.545935  706399 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:30.741971  706399 request.go:629] Waited for 195.944564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107476
	I1218 11:53:30.742047  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107476
	I1218 11:53:30.742052  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.742062  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.742069  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.745047  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:30.745075  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.745084  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.745092  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.745105  706399 round_trippers.go:580]     Audit-Id: 588c2353-9d7d-488b-a950-87bf03ba3da0
	I1218 11:53:30.745115  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.745122  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.745130  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.745381  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-107476","namespace":"kube-system","uid":"08f65d94-d942-4ae5-a937-e3efff4b51dd","resourceVersion":"770","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"47de9e5e3d9b879716556f063f68cd22","kubernetes.io/config.mirror":"47de9e5e3d9b879716556f063f68cd22","kubernetes.io/config.seen":"2023-12-18T11:49:16.607308314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5152 chars]
	I1218 11:53:30.941089  706399 request.go:629] Waited for 195.312312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:30.941185  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:30.941199  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.941210  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.941216  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.944408  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:30.944434  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.944445  706399 round_trippers.go:580]     Audit-Id: 7a4702e0-308a-4d75-b115-eb14716b6830
	I1218 11:53:30.944453  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.944462  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.944474  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.944486  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.944497  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.944675  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:30.945060  706399 pod_ready.go:97] node "multinode-107476" hosting pod "kube-scheduler-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:30.945088  706399 pod_ready.go:81] duration metric: took 399.145466ms waiting for pod "kube-scheduler-multinode-107476" in "kube-system" namespace to be "Ready" ...
	E1218 11:53:30.945102  706399 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-107476" hosting pod "kube-scheduler-multinode-107476" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-107476" has status "Ready":"False"
	I1218 11:53:30.945113  706399 pod_ready.go:38] duration metric: took 1.740603836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 11:53:30.945134  706399 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1218 11:53:30.975551  706399 command_runner.go:130] > -16
	I1218 11:53:30.975760  706399 ops.go:34] apiserver oom_adj: -16
	I1218 11:53:30.975788  706399 kubeadm.go:640] restartCluster took 21.903211868s
	I1218 11:53:30.975799  706399 kubeadm.go:406] StartCluster complete in 21.931036061s
	I1218 11:53:30.975823  706399 settings.go:142] acquiring lock: {Name:mk1b55e0e8c256c6bc60d3bea159645d01ed78f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:53:30.975910  706399 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:53:30.976662  706399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17824-683489/kubeconfig: {Name:mkbe3b47b918311ed7d778fc321c77660f5f2482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:53:30.976915  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1218 11:53:30.976953  706399 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1218 11:53:30.980045  706399 out.go:177] * Enabled addons: 
	I1218 11:53:30.977197  706399 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:53:30.977270  706399 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:53:30.981684  706399 addons.go:502] enable addons completed in 4.7055ms: enabled=[]
	I1218 11:53:30.982005  706399 kapi.go:59] client config for multinode-107476: &rest.Config{Host:"https://192.168.39.124:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/client.crt", KeyFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/client.key", CAFile:"/home/jenkins/minikube-integration/17824-683489/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c1ed00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1218 11:53:30.982452  706399 round_trippers.go:463] GET https://192.168.39.124:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1218 11:53:30.982466  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:30.982478  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:30.982487  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:30.985560  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:30.985590  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:30.985598  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:30 GMT
	I1218 11:53:30.985604  706399 round_trippers.go:580]     Audit-Id: 733c0867-ba1a-4681-b566-8abcfe50d689
	I1218 11:53:30.985613  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:30.985627  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:30.985638  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:30.985644  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:30.985652  706399 round_trippers.go:580]     Content-Length: 291
	I1218 11:53:30.985680  706399 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"3f9d4717-a78b-4c7e-9f95-6ab3b5581a7f","resourceVersion":"778","creationTimestamp":"2023-12-18T11:49:16Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1218 11:53:30.985863  706399 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-107476" context rescaled to 1 replicas
	I1218 11:53:30.985895  706399 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1218 11:53:30.987695  706399 out.go:177] * Verifying Kubernetes components...
	I1218 11:53:30.989853  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 11:53:31.158210  706399 command_runner.go:130] > apiVersion: v1
	I1218 11:53:31.158232  706399 command_runner.go:130] > data:
	I1218 11:53:31.158237  706399 command_runner.go:130] >   Corefile: |
	I1218 11:53:31.158243  706399 command_runner.go:130] >     .:53 {
	I1218 11:53:31.158250  706399 command_runner.go:130] >         log
	I1218 11:53:31.158263  706399 command_runner.go:130] >         errors
	I1218 11:53:31.158271  706399 command_runner.go:130] >         health {
	I1218 11:53:31.158287  706399 command_runner.go:130] >            lameduck 5s
	I1218 11:53:31.158292  706399 command_runner.go:130] >         }
	I1218 11:53:31.158300  706399 command_runner.go:130] >         ready
	I1218 11:53:31.158309  706399 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1218 11:53:31.158313  706399 command_runner.go:130] >            pods insecure
	I1218 11:53:31.158325  706399 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1218 11:53:31.158335  706399 command_runner.go:130] >            ttl 30
	I1218 11:53:31.158342  706399 command_runner.go:130] >         }
	I1218 11:53:31.158352  706399 command_runner.go:130] >         prometheus :9153
	I1218 11:53:31.158360  706399 command_runner.go:130] >         hosts {
	I1218 11:53:31.158374  706399 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1218 11:53:31.158384  706399 command_runner.go:130] >            fallthrough
	I1218 11:53:31.158390  706399 command_runner.go:130] >         }
	I1218 11:53:31.158397  706399 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1218 11:53:31.158404  706399 command_runner.go:130] >            max_concurrent 1000
	I1218 11:53:31.158411  706399 command_runner.go:130] >         }
	I1218 11:53:31.158418  706399 command_runner.go:130] >         cache 30
	I1218 11:53:31.158434  706399 command_runner.go:130] >         loop
	I1218 11:53:31.158444  706399 command_runner.go:130] >         reload
	I1218 11:53:31.158453  706399 command_runner.go:130] >         loadbalance
	I1218 11:53:31.158462  706399 command_runner.go:130] >     }
	I1218 11:53:31.158472  706399 command_runner.go:130] > kind: ConfigMap
	I1218 11:53:31.158481  706399 command_runner.go:130] > metadata:
	I1218 11:53:31.158488  706399 command_runner.go:130] >   creationTimestamp: "2023-12-18T11:49:16Z"
	I1218 11:53:31.158492  706399 command_runner.go:130] >   name: coredns
	I1218 11:53:31.158498  706399 command_runner.go:130] >   namespace: kube-system
	I1218 11:53:31.158509  706399 command_runner.go:130] >   resourceVersion: "396"
	I1218 11:53:31.158517  706399 command_runner.go:130] >   uid: 9e09d417-7d67-4099-aeea-880a5f122cec
	I1218 11:53:31.161286  706399 node_ready.go:35] waiting up to 6m0s for node "multinode-107476" to be "Ready" ...
	I1218 11:53:31.161454  706399 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1218 11:53:31.161506  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:31.161526  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:31.161538  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:31.161551  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:31.164076  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:31.164092  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:31.164099  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:31.164104  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:31.164109  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:31 GMT
	I1218 11:53:31.164114  706399 round_trippers.go:580]     Audit-Id: a1e2309c-5203-41c1-bdff-38bf4aa1b0e4
	I1218 11:53:31.164119  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:31.164124  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:31.164299  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:31.661958  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:31.661994  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:31.662005  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:31.662014  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:31.665299  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:31.665326  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:31.665337  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:31.665345  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:31 GMT
	I1218 11:53:31.665354  706399 round_trippers.go:580]     Audit-Id: 715c021b-232b-46db-b224-0ee0e1d87bd0
	I1218 11:53:31.665364  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:31.665372  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:31.665383  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:31.665557  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:32.162261  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:32.162294  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:32.162318  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:32.162328  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:32.165415  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:32.165445  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:32.165456  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:32.165465  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:32.165473  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:32 GMT
	I1218 11:53:32.165480  706399 round_trippers.go:580]     Audit-Id: 8178c82f-f5df-4946-829a-8d607bef70f1
	I1218 11:53:32.165487  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:32.165494  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:32.165662  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:32.662421  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:32.662459  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:32.662472  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:32.662482  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:32.665000  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:32.665024  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:32.665031  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:32.665036  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:32.665044  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:32.665050  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:32.665055  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:32 GMT
	I1218 11:53:32.665063  706399 round_trippers.go:580]     Audit-Id: acbd11c7-43ce-4b9c-970b-6cfe7595d19b
	I1218 11:53:32.665272  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:33.161915  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:33.161951  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:33.161964  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:33.161973  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:33.164679  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:33.164707  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:33.164718  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:33.164727  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:33.164734  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:33 GMT
	I1218 11:53:33.164742  706399 round_trippers.go:580]     Audit-Id: 803dab92-ad10-4d1e-9c2c-02e13845c977
	I1218 11:53:33.164754  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:33.164761  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:33.164950  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"765","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1218 11:53:33.165364  706399 node_ready.go:58] node "multinode-107476" has status "Ready":"False"
	I1218 11:53:33.661704  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:33.661729  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:33.661737  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:33.661743  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:33.664502  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:33.664528  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:33.664537  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:33 GMT
	I1218 11:53:33.664542  706399 round_trippers.go:580]     Audit-Id: 78f37d85-d255-498a-97ae-7e7ffea71734
	I1218 11:53:33.664547  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:33.664552  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:33.664558  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:33.664563  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:33.664871  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:33.665288  706399 node_ready.go:49] node "multinode-107476" has status "Ready":"True"
	I1218 11:53:33.665314  706399 node_ready.go:38] duration metric: took 2.503992718s waiting for node "multinode-107476" to be "Ready" ...
	I1218 11:53:33.665324  706399 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 11:53:33.665384  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:33.665393  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:33.665400  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:33.665406  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:33.668975  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:33.668992  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:33.668998  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:33 GMT
	I1218 11:53:33.669004  706399 round_trippers.go:580]     Audit-Id: 616ca18a-8e53-464b-b8f7-fdc3a26f56e2
	I1218 11:53:33.669011  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:33.669016  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:33.669021  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:33.669026  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:33.670356  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"852"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83732 chars]
	I1218 11:53:33.672899  706399 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:33.672977  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:33.672986  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:33.672993  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:33.672999  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:33.675712  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:33.675728  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:33.675743  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:33 GMT
	I1218 11:53:33.675748  706399 round_trippers.go:580]     Audit-Id: eabe770a-6bc4-4dfc-b039-991ddbcade34
	I1218 11:53:33.675755  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:33.675760  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:33.675765  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:33.675771  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:33.676383  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:33.676975  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:33.676993  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:33.677001  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:33.677007  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:33.678858  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:33.678876  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:33.678885  706399 round_trippers.go:580]     Audit-Id: f39381a7-3505-48c5-8706-62a66b7c6d74
	I1218 11:53:33.678898  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:33.678907  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:33.678913  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:33.678918  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:33.678926  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:33 GMT
	I1218 11:53:33.679219  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:34.173545  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:34.173574  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:34.173582  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:34.173588  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:34.177792  706399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 11:53:34.177814  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:34.177821  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:34.177827  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:34.177832  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:34.177837  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:34 GMT
	I1218 11:53:34.177842  706399 round_trippers.go:580]     Audit-Id: e0c08780-2ccb-4466-ac60-0130be0e91bb
	I1218 11:53:34.177847  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:34.178197  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:34.178858  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:34.178877  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:34.178888  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:34.178898  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:34.182714  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:34.182734  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:34.182741  706399 round_trippers.go:580]     Audit-Id: 8aad3fb3-c28c-4741-bb51-1b599fc4d9a2
	I1218 11:53:34.182746  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:34.182751  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:34.182756  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:34.182761  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:34.182766  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:34 GMT
	I1218 11:53:34.183249  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:34.674054  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:34.674087  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:34.674102  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:34.674111  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:34.677143  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:34.677168  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:34.677175  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:34.677181  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:34.677191  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:34.677196  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:34 GMT
	I1218 11:53:34.677201  706399 round_trippers.go:580]     Audit-Id: 705cd8df-0cf3-47cc-9898-d4f3cbf27fc1
	I1218 11:53:34.677206  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:34.677480  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:34.677955  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:34.677969  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:34.677977  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:34.677983  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:34.680928  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:34.680951  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:34.680961  706399 round_trippers.go:580]     Audit-Id: 79e80102-7689-456c-968e-8b545873dcf0
	I1218 11:53:34.680969  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:34.680979  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:34.680992  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:34.681003  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:34.681011  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:34 GMT
	I1218 11:53:34.681532  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:35.173215  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:35.173248  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:35.173257  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:35.173309  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:35.176153  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:35.176175  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:35.176183  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:35.176190  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:35.176199  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:35 GMT
	I1218 11:53:35.176207  706399 round_trippers.go:580]     Audit-Id: 6b84f91f-f0e3-431d-b790-7a72f221660b
	I1218 11:53:35.176218  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:35.176227  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:35.176689  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:35.177270  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:35.177287  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:35.177295  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:35.177303  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:35.179670  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:35.179698  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:35.179705  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:35.179712  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:35.179720  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:35.179728  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:35 GMT
	I1218 11:53:35.179735  706399 round_trippers.go:580]     Audit-Id: 4ebee61b-cc3b-47df-a387-697134152b33
	I1218 11:53:35.179744  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:35.179923  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:35.673560  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:35.673590  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:35.673599  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:35.673605  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:35.676855  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:35.676885  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:35.676895  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:35.676903  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:35.676910  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:35 GMT
	I1218 11:53:35.676917  706399 round_trippers.go:580]     Audit-Id: 29332715-2ca7-46d2-9eae-60bcc11a611d
	I1218 11:53:35.676923  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:35.676931  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:35.677062  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:35.677571  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:35.677588  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:35.677599  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:35.677610  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:35.680478  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:35.680509  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:35.680519  706399 round_trippers.go:580]     Audit-Id: cf7fec49-5746-4ad8-ad95-44ddd5a46a7c
	I1218 11:53:35.680528  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:35.680537  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:35.680545  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:35.680552  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:35.680560  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:35 GMT
	I1218 11:53:35.680765  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:35.681145  706399 pod_ready.go:102] pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace has status "Ready":"False"
	I1218 11:53:36.173403  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:36.173429  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:36.173440  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:36.173448  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:36.179050  706399 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 11:53:36.179081  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:36.179092  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:36.179127  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:36.179141  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:36.179149  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:36 GMT
	I1218 11:53:36.179161  706399 round_trippers.go:580]     Audit-Id: 2262febf-a9c1-4185-a064-37f0e57229fd
	I1218 11:53:36.179173  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:36.179914  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:36.180600  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:36.180626  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:36.180638  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:36.180648  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:36.182832  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:36.182851  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:36.182859  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:36.182867  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:36.182874  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:36.182881  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:36 GMT
	I1218 11:53:36.182890  706399 round_trippers.go:580]     Audit-Id: 7672883b-ce34-4c88-940d-e431e9489d5d
	I1218 11:53:36.182900  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:36.183021  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:36.673765  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:36.673797  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:36.673809  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:36.673816  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:36.676897  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:36.676920  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:36.676941  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:36.676948  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:36.676956  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:36.676963  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:36.676971  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:36 GMT
	I1218 11:53:36.676985  706399 round_trippers.go:580]     Audit-Id: 4ce118c9-c9cf-42f4-ad28-24e77a8f8d0b
	I1218 11:53:36.677587  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:36.678050  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:36.678064  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:36.678073  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:36.678079  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:36.680488  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:36.680504  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:36.680513  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:36.680520  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:36.680528  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:36.680542  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:36.680558  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:36 GMT
	I1218 11:53:36.680567  706399 round_trippers.go:580]     Audit-Id: 329a463d-e9eb-4a48-941f-81cfd668cb20
	I1218 11:53:36.680745  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:37.173387  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:37.173415  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:37.173423  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:37.173430  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:37.176760  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:37.176789  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:37.176799  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:37.176807  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:37.176814  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:37 GMT
	I1218 11:53:37.176822  706399 round_trippers.go:580]     Audit-Id: ccd32a2b-22a0-4e80-891a-798ae2e74751
	I1218 11:53:37.176830  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:37.176841  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:37.177566  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:37.178053  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:37.178066  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:37.178074  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:37.178080  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:37.180584  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:37.180606  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:37.180616  706399 round_trippers.go:580]     Audit-Id: 6b24dd36-f6bb-4b1e-bc13-bfdc9fcb3deb
	I1218 11:53:37.180624  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:37.180634  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:37.180644  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:37.180660  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:37.180673  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:37 GMT
	I1218 11:53:37.181042  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:37.673822  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:37.673855  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:37.673864  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:37.673870  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:37.676905  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:37.676930  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:37.676937  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:37.676943  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:37.676948  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:37.676953  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:37 GMT
	I1218 11:53:37.676958  706399 round_trippers.go:580]     Audit-Id: c843db6c-febe-472d-9c6d-2c60ae326f9c
	I1218 11:53:37.676963  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:37.677455  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:37.677995  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:37.678010  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:37.678018  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:37.678024  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:37.680442  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:37.680462  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:37.680471  706399 round_trippers.go:580]     Audit-Id: 03d35d8e-3248-4e4a-aaa4-561ea5506445
	I1218 11:53:37.680479  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:37.680486  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:37.680494  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:37.680506  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:37.680514  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:37 GMT
	I1218 11:53:37.680764  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:38.173464  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:38.173495  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:38.173504  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:38.173510  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:38.177182  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:38.177207  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:38.177217  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:38.177225  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:38 GMT
	I1218 11:53:38.177231  706399 round_trippers.go:580]     Audit-Id: 8f5f4bf7-c666-4e33-9c29-fb899337e95e
	I1218 11:53:38.177238  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:38.177245  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:38.177252  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:38.177919  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:38.178418  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:38.178436  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:38.178444  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:38.178449  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:38.181432  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:38.181453  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:38.181463  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:38 GMT
	I1218 11:53:38.181472  706399 round_trippers.go:580]     Audit-Id: cdef1a0d-7934-417d-b867-e54c5da5c288
	I1218 11:53:38.181480  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:38.181488  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:38.181497  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:38.181506  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:38.182567  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:38.182937  706399 pod_ready.go:102] pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace has status "Ready":"False"
	I1218 11:53:38.673981  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:38.674003  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:38.674014  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:38.674021  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:38.676858  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:38.676938  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:38.676957  706399 round_trippers.go:580]     Audit-Id: 2ef42698-8375-41e0-83e7-e39f4386e551
	I1218 11:53:38.676967  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:38.676976  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:38.676982  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:38.676987  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:38.676995  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:38 GMT
	I1218 11:53:38.677194  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:38.677739  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:38.677756  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:38.677766  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:38.677775  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:38.680079  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:38.680104  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:38.680114  706399 round_trippers.go:580]     Audit-Id: cb04d605-5990-411c-bb61-d27a16eb40e0
	I1218 11:53:38.680122  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:38.680127  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:38.680132  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:38.680137  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:38.680142  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:38 GMT
	I1218 11:53:38.680303  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:39.173689  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:39.173724  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:39.173735  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:39.173743  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:39.176928  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:39.176956  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:39.176966  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:39.176974  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:39.176991  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:39.176998  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:39 GMT
	I1218 11:53:39.177009  706399 round_trippers.go:580]     Audit-Id: 53d7f113-e0ab-4396-97c8-fac771a70baa
	I1218 11:53:39.177017  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:39.177158  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:39.177635  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:39.177666  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:39.177677  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:39.177687  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:39.180115  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:39.180141  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:39.180152  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:39.180160  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:39.180166  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:39.180174  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:39 GMT
	I1218 11:53:39.180179  706399 round_trippers.go:580]     Audit-Id: 90d34db6-74ca-42f5-81d9-8222532758aa
	I1218 11:53:39.180196  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:39.180432  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:39.674135  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:39.674165  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:39.674176  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:39.674185  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:39.676939  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:39.676965  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:39.676974  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:39.676990  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:39.676995  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:39 GMT
	I1218 11:53:39.677000  706399 round_trippers.go:580]     Audit-Id: 501a9bb0-a4f9-46a1-b970-b27f1660227c
	I1218 11:53:39.677005  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:39.677011  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:39.677211  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:39.677746  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:39.677765  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:39.677776  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:39.677784  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:39.680008  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:39.680025  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:39.680032  706399 round_trippers.go:580]     Audit-Id: 649faefe-95f3-4ba5-944c-2b3ac4a04840
	I1218 11:53:39.680037  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:39.680042  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:39.680047  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:39.680059  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:39.680064  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:39 GMT
	I1218 11:53:39.680517  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:40.173280  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:40.173318  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:40.173330  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:40.173338  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:40.176226  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:40.176252  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:40.176260  706399 round_trippers.go:580]     Audit-Id: dcaceef1-cb4c-409d-9795-82135569a3f0
	I1218 11:53:40.176265  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:40.176271  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:40.176276  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:40.176281  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:40.176286  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:40 GMT
	I1218 11:53:40.176500  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:40.177135  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:40.177154  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:40.177166  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:40.177173  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:40.179445  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:40.179459  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:40.179466  706399 round_trippers.go:580]     Audit-Id: 9033ba3c-1dd2-4b09-8d85-34017bc0e26d
	I1218 11:53:40.179471  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:40.179476  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:40.179480  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:40.179486  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:40.179491  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:40 GMT
	I1218 11:53:40.179900  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:40.673585  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:40.673616  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:40.673624  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:40.673630  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:40.676460  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:40.676486  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:40.676496  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:40.676505  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:40 GMT
	I1218 11:53:40.676513  706399 round_trippers.go:580]     Audit-Id: 0ac8a2dd-4ed5-431e-9228-2726aad2faf3
	I1218 11:53:40.676522  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:40.676532  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:40.676542  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:40.676681  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:40.677282  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:40.677299  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:40.677309  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:40.677322  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:40.679390  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:40.679405  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:40.679411  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:40.679417  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:40.679422  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:40.679426  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:40.679431  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:40 GMT
	I1218 11:53:40.679437  706399 round_trippers.go:580]     Audit-Id: 3088a618-2697-41b5-b81f-673ab861df2d
	I1218 11:53:40.679674  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:40.680074  706399 pod_ready.go:102] pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace has status "Ready":"False"
	I1218 11:53:41.173403  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:41.173429  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:41.173438  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:41.173443  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:41.176266  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:41.176289  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:41.176300  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:41.176315  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:41.176322  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:41.176336  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:41 GMT
	I1218 11:53:41.176349  706399 round_trippers.go:580]     Audit-Id: 53e21024-a9d5-4eca-a522-b1244059f300
	I1218 11:53:41.176356  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:41.177028  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:41.177537  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:41.177553  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:41.177561  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:41.177570  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:41.179482  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:41.179501  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:41.179523  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:41.179532  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:41.179542  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:41.179552  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:41.179564  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:41 GMT
	I1218 11:53:41.179575  706399 round_trippers.go:580]     Audit-Id: 237d0bff-9402-489c-822c-431b43baeb0c
	I1218 11:53:41.179806  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:41.673434  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:41.673463  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:41.673475  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:41.673481  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:41.676679  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:41.676701  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:41.676709  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:41.676715  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:41 GMT
	I1218 11:53:41.676720  706399 round_trippers.go:580]     Audit-Id: 5e3f13c1-c640-47db-98ab-31b91f950abc
	I1218 11:53:41.676725  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:41.676731  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:41.676736  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:41.677002  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:41.677473  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:41.677493  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:41.677504  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:41.677512  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:41.679823  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:41.679840  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:41.679847  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:41.679852  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:41.679857  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:41.679862  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:41 GMT
	I1218 11:53:41.679867  706399 round_trippers.go:580]     Audit-Id: c58e98cd-5718-47f9-b671-de3e227e7f8a
	I1218 11:53:41.679880  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:41.680038  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:42.173754  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:42.173792  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:42.173801  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:42.173807  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:42.176269  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:42.176291  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:42.176307  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:42 GMT
	I1218 11:53:42.176315  706399 round_trippers.go:580]     Audit-Id: f068f51e-93e6-4b4b-8a24-382d1325b363
	I1218 11:53:42.176324  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:42.176333  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:42.176343  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:42.176352  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:42.176513  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:42.176990  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:42.177006  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:42.177016  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:42.177025  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:42.179154  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:42.179173  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:42.179184  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:42.179193  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:42.179200  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:42 GMT
	I1218 11:53:42.179208  706399 round_trippers.go:580]     Audit-Id: 6bd1c020-71a6-4a7c-b496-e507683b71a1
	I1218 11:53:42.179214  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:42.179219  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:42.179368  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:42.674178  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:42.674211  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:42.674219  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:42.674225  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:42.676989  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:42.677019  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:42.677030  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:42.677039  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:42.677048  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:42 GMT
	I1218 11:53:42.677057  706399 round_trippers.go:580]     Audit-Id: e7f3a1b6-10ed-4499-9e1f-e736dfc275de
	I1218 11:53:42.677069  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:42.677077  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:42.677226  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:42.677701  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:42.677715  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:42.677722  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:42.677728  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:42.679919  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:42.679944  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:42.679952  706399 round_trippers.go:580]     Audit-Id: b58dd22e-a294-44fd-a21e-73d9d8edf70c
	I1218 11:53:42.679958  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:42.679963  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:42.679968  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:42.679974  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:42.679979  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:42 GMT
	I1218 11:53:42.680228  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:42.680665  706399 pod_ready.go:102] pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace has status "Ready":"False"
	I1218 11:53:43.173955  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:43.173986  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:43.173994  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:43.174000  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:43.179521  706399 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1218 11:53:43.179550  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:43.179561  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:43 GMT
	I1218 11:53:43.179571  706399 round_trippers.go:580]     Audit-Id: 361dfd5f-b3d7-4aee-a744-f1e5be8299ab
	I1218 11:53:43.179579  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:43.179587  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:43.179597  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:43.179605  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:43.179840  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:43.180347  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:43.180364  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:43.180371  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:43.180377  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:43.182529  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:43.182552  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:43.182562  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:43 GMT
	I1218 11:53:43.182571  706399 round_trippers.go:580]     Audit-Id: c293cdc2-7c87-4e68-b2af-879cb905970f
	I1218 11:53:43.182578  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:43.182587  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:43.182594  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:43.182602  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:43.182772  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:43.673323  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:43.673355  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:43.673366  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:43.673375  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:43.676722  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:43.676752  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:43.676762  706399 round_trippers.go:580]     Audit-Id: 94473599-4289-4510-bb2d-43ba24b179f0
	I1218 11:53:43.676770  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:43.676778  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:43.676804  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:43.676819  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:43.676832  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:43 GMT
	I1218 11:53:43.677037  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:43.677593  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:43.677612  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:43.677624  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:43.677633  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:43.680695  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:43.680718  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:43.680727  706399 round_trippers.go:580]     Audit-Id: bdae868b-fd96-4f89-9ccb-5dce584f6e62
	I1218 11:53:43.680737  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:43.680745  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:43.680753  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:43.680770  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:43.680778  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:43 GMT
	I1218 11:53:43.681643  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.173868  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:44.173892  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.173900  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.173907  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.185903  706399 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1218 11:53:44.185939  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.185949  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.185957  706399 round_trippers.go:580]     Audit-Id: 0807c889-4f55-447d-909a-ec577df47c9f
	I1218 11:53:44.185964  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.185973  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.185981  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.185990  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.186217  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"773","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1218 11:53:44.186803  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:44.186821  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.186829  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.186835  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.189463  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.189484  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.189494  706399 round_trippers.go:580]     Audit-Id: d76f03a9-c756-48da-8594-aa7191476ce1
	I1218 11:53:44.189502  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.189510  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.189519  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.189527  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.189536  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.189666  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.673257  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-nl8xc
	I1218 11:53:44.673294  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.673303  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.673309  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.678016  706399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 11:53:44.678037  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.678044  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.678061  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.678066  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.678071  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.678076  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.678082  706399 round_trippers.go:580]     Audit-Id: ded65a70-0ef7-468a-8c23-d3584306f5ce
	I1218 11:53:44.678372  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"887","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I1218 11:53:44.678912  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:44.678929  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.678936  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.678943  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.683034  706399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 11:53:44.683059  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.683068  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.683076  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.683085  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.683103  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.683116  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.683124  706399 round_trippers.go:580]     Audit-Id: dee3a488-a4c0-429c-a3d0-763057e3e6fa
	I1218 11:53:44.683810  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.684155  706399 pod_ready.go:92] pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:44.684175  706399 pod_ready.go:81] duration metric: took 11.01125188s waiting for pod "coredns-5dd5756b68-nl8xc" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.684185  706399 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.684251  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-107476
	I1218 11:53:44.684260  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.684267  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.684273  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.686236  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:44.686257  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.686282  706399 round_trippers.go:580]     Audit-Id: 57a8ca26-0ed4-4f32-a864-04c5cde44f00
	I1218 11:53:44.686294  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.686304  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.686317  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.686324  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.686334  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.686465  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-107476","namespace":"kube-system","uid":"57bcfe21-f4da-4bcf-bb4e-385b695e1e0f","resourceVersion":"860","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.124:2379","kubernetes.io/config.hash":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.mirror":"0580320334260bd56968136e3903eaf1","kubernetes.io/config.seen":"2023-12-18T11:49:16.607301032Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I1218 11:53:44.686943  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:44.686962  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.686969  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.686975  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.689166  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.689180  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.689186  706399 round_trippers.go:580]     Audit-Id: cf29250f-3957-4111-b39c-e51f822d2956
	I1218 11:53:44.689192  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.689196  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.689201  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.689206  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.689214  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.689316  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.689596  706399 pod_ready.go:92] pod "etcd-multinode-107476" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:44.689612  706399 pod_ready.go:81] duration metric: took 5.418084ms waiting for pod "etcd-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.689626  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.689687  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-107476
	I1218 11:53:44.689696  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.689702  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.689708  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.692944  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:44.692965  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.692974  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.692983  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.692991  706399 round_trippers.go:580]     Audit-Id: 6c7bd0e3-c0dc-4d2f-8958-13828542872b
	I1218 11:53:44.692999  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.693007  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.693017  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.693306  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-107476","namespace":"kube-system","uid":"ed1a5fb5-539a-4a7d-9977-42e1392858fb","resourceVersion":"856","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.124:8443","kubernetes.io/config.hash":"d249aa06177557dc7c27cc4c9fd3f8c4","kubernetes.io/config.mirror":"d249aa06177557dc7c27cc4c9fd3f8c4","kubernetes.io/config.seen":"2023-12-18T11:49:16.607305528Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7615 chars]
	I1218 11:53:44.693815  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:44.693830  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.693837  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.693842  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.696806  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.696825  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.696832  706399 round_trippers.go:580]     Audit-Id: 0bdd9f51-a776-465a-8a9e-1430d9ca51e2
	I1218 11:53:44.696837  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.696842  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.696846  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.696851  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.696856  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.697133  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.697438  706399 pod_ready.go:92] pod "kube-apiserver-multinode-107476" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:44.697454  706399 pod_ready.go:81] duration metric: took 7.821649ms waiting for pod "kube-apiserver-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.697463  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.697538  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-107476
	I1218 11:53:44.697551  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.697563  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.697579  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.700370  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.700389  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.700399  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.700408  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.700415  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.700424  706399 round_trippers.go:580]     Audit-Id: a8f89c02-db62-4dfd-aeec-c6d8bec7c55d
	I1218 11:53:44.700432  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.700440  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.702801  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-107476","namespace":"kube-system","uid":"9b1fc3f6-07ef-4577-9135-a1c4844e5555","resourceVersion":"851","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"00c351f167ca4a8342aa8125cafbf1ad","kubernetes.io/config.mirror":"00c351f167ca4a8342aa8125cafbf1ad","kubernetes.io/config.seen":"2023-12-18T11:49:16.607306981Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7178 chars]
	I1218 11:53:44.703704  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:44.703722  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.703731  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.703740  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.706249  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.706267  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.706274  706399 round_trippers.go:580]     Audit-Id: 7ca2ece9-43e8-49c0-b944-aa148d24246d
	I1218 11:53:44.706279  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.706284  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.706289  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.706295  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.706308  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.706518  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:44.706859  706399 pod_ready.go:92] pod "kube-controller-manager-multinode-107476" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:44.706874  706399 pod_ready.go:81] duration metric: took 9.405069ms waiting for pod "kube-controller-manager-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.706885  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9xwh7" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.706943  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xwh7
	I1218 11:53:44.706954  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.706961  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.706969  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.709895  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.709910  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.709916  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.709921  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.709926  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.709931  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.709936  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.709941  706399 round_trippers.go:580]     Audit-Id: 73ccb16b-4b09-4e96-9ff3-b6875d4dcebf
	I1218 11:53:44.710221  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9xwh7","generateName":"kube-proxy-","namespace":"kube-system","uid":"d1b02596-ab29-4f7a-8118-bd091eef9e44","resourceVersion":"520","creationTimestamp":"2023-12-18T11:50:18Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:50:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5545 chars]
	I1218 11:53:44.710653  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m02
	I1218 11:53:44.710668  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.710679  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.710689  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.713326  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.713340  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.713347  706399 round_trippers.go:580]     Audit-Id: 51b3f6a6-746d-4c41-89de-3e3d10f2ac93
	I1218 11:53:44.713367  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.713375  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.713380  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.713385  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.713396  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.713985  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476-m02","uid":"aac92642-4fcf-4fbe-89f6-b1c274d602fe","resourceVersion":"737","creationTimestamp":"2023-12-18T11:50:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T11_52_06_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:50:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3819 chars]
	I1218 11:53:44.714201  706399 pod_ready.go:92] pod "kube-proxy-9xwh7" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:44.714214  706399 pod_ready.go:81] duration metric: took 7.323276ms waiting for pod "kube-proxy-9xwh7" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.714224  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ff4bs" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:44.873723  706399 request.go:629] Waited for 159.413698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff4bs
	I1218 11:53:44.873815  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ff4bs
	I1218 11:53:44.873823  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:44.873835  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:44.873846  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:44.876813  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:44.876855  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:44.876866  706399 round_trippers.go:580]     Audit-Id: 80d37f73-516f-4df0-a715-29b05d26f212
	I1218 11:53:44.876872  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:44.876878  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:44.876883  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:44.876888  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:44.876895  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:44 GMT
	I1218 11:53:44.877037  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ff4bs","generateName":"kube-proxy-","namespace":"kube-system","uid":"a5e9af15-7c15-4de8-8be0-1b8e7289125f","resourceVersion":"746","creationTimestamp":"2023-12-18T11:51:17Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:51:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I1218 11:53:45.074061  706399 request.go:629] Waited for 196.407368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m03
	I1218 11:53:45.074135  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476-m03
	I1218 11:53:45.074141  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.074148  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.074154  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.076973  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:45.077001  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.077013  706399 round_trippers.go:580]     Audit-Id: 0150f682-9003-42e6-95c5-4a92f0ba4920
	I1218 11:53:45.077022  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.077031  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.077040  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.077046  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.077051  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.077151  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476-m03","uid":"18274b06-f1b8-4878-9e6b-e3745fba73a7","resourceVersion":"759","creationTimestamp":"2023-12-18T11:52:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2023_12_18T11_52_06_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:52:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3635 chars]
	I1218 11:53:45.077554  706399 pod_ready.go:92] pod "kube-proxy-ff4bs" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:45.077579  706399 pod_ready.go:81] duration metric: took 363.348514ms waiting for pod "kube-proxy-ff4bs" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:45.077591  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jf8kx" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:45.273746  706399 request.go:629] Waited for 196.06681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jf8kx
	I1218 11:53:45.273821  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jf8kx
	I1218 11:53:45.273827  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.273835  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.273842  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.276787  706399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1218 11:53:45.276809  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.276816  706399 round_trippers.go:580]     Audit-Id: a6700efe-44c9-4e0b-ab8b-4cceb94a69cc
	I1218 11:53:45.276825  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.276834  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.276842  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.276850  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.276859  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.277036  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jf8kx","generateName":"kube-proxy-","namespace":"kube-system","uid":"060b1020-573b-4b35-9a0b-e04f37535267","resourceVersion":"782","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0e72fcc9-1564-4bdd-b4f8-62b22413c21c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0e72fcc9-1564-4bdd-b4f8-62b22413c21c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5742 chars]
	I1218 11:53:45.474033  706399 request.go:629] Waited for 196.438047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:45.474131  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:45.474142  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.474156  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.474169  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.477824  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:45.477853  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.477864  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.477873  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.477880  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.477889  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.477897  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.477909  706399 round_trippers.go:580]     Audit-Id: c36a7602-f8f6-447c-85d1-76254cd38665
	I1218 11:53:45.478069  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:45.478494  706399 pod_ready.go:92] pod "kube-proxy-jf8kx" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:45.478515  706399 pod_ready.go:81] duration metric: took 400.917905ms waiting for pod "kube-proxy-jf8kx" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:45.478525  706399 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:45.673370  706399 request.go:629] Waited for 194.759725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107476
	I1218 11:53:45.673457  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-107476
	I1218 11:53:45.673463  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.673471  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.673480  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.677105  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:45.677128  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.677137  706399 round_trippers.go:580]     Audit-Id: 5a30c9ba-0617-498f-83e0-396ac7b0a17b
	I1218 11:53:45.677145  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.677153  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.677160  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.677167  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.677180  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.677824  706399 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-107476","namespace":"kube-system","uid":"08f65d94-d942-4ae5-a937-e3efff4b51dd","resourceVersion":"862","creationTimestamp":"2023-12-18T11:49:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"47de9e5e3d9b879716556f063f68cd22","kubernetes.io/config.mirror":"47de9e5e3d9b879716556f063f68cd22","kubernetes.io/config.seen":"2023-12-18T11:49:16.607308314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4908 chars]
	I1218 11:53:45.873712  706399 request.go:629] Waited for 195.397858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:45.873812  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes/multinode-107476
	I1218 11:53:45.873823  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.873831  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.873837  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.876889  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:45.876911  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.876918  706399 round_trippers.go:580]     Audit-Id: bfe725d9-9c70-4dc7-bd45-d55e484f467a
	I1218 11:53:45.876924  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.876928  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.876933  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.876938  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.876943  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.877172  706399 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-12-18T11:49:13Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1218 11:53:45.877490  706399 pod_ready.go:92] pod "kube-scheduler-multinode-107476" in "kube-system" namespace has status "Ready":"True"
	I1218 11:53:45.877504  706399 pod_ready.go:81] duration metric: took 398.969668ms waiting for pod "kube-scheduler-multinode-107476" in "kube-system" namespace to be "Ready" ...
	I1218 11:53:45.877517  706399 pod_ready.go:38] duration metric: took 12.212180593s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1218 11:53:45.877535  706399 api_server.go:52] waiting for apiserver process to appear ...
	I1218 11:53:45.877585  706399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:53:45.893465  706399 command_runner.go:130] > 1729
	I1218 11:53:45.893561  706399 api_server.go:72] duration metric: took 14.907630232s to wait for apiserver process to appear ...
	I1218 11:53:45.893577  706399 api_server.go:88] waiting for apiserver healthz status ...
	I1218 11:53:45.893601  706399 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:53:45.899790  706399 api_server.go:279] https://192.168.39.124:8443/healthz returned 200:
	ok
	I1218 11:53:45.899867  706399 round_trippers.go:463] GET https://192.168.39.124:8443/version
	I1218 11:53:45.899873  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:45.899881  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:45.899887  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:45.901094  706399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1218 11:53:45.901120  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:45.901128  706399 round_trippers.go:580]     Audit-Id: 7b85e82b-ec64-4584-8946-326f560ec5fc
	I1218 11:53:45.901134  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:45.901139  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:45.901145  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:45.901150  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:45.901156  706399 round_trippers.go:580]     Content-Length: 264
	I1218 11:53:45.901164  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:45 GMT
	I1218 11:53:45.901186  706399 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1218 11:53:45.901243  706399 api_server.go:141] control plane version: v1.28.4
	I1218 11:53:45.901259  706399 api_server.go:131] duration metric: took 7.675448ms to wait for apiserver health ...
	I1218 11:53:45.901267  706399 system_pods.go:43] waiting for kube-system pods to appear ...
	I1218 11:53:46.073761  706399 request.go:629] Waited for 172.377393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:46.073824  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:46.073837  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:46.073845  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:46.073851  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:46.078255  706399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 11:53:46.078283  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:46.078291  706399 round_trippers.go:580]     Audit-Id: 8a8a3f91-2b40-4ed6-8673-2e9287ce0bf7
	I1218 11:53:46.078296  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:46.078302  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:46.078307  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:46.078312  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:46.078317  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:46 GMT
	I1218 11:53:46.079532  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"891"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"887","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82968 chars]
	I1218 11:53:46.083180  706399 system_pods.go:59] 12 kube-system pods found
	I1218 11:53:46.083218  706399 system_pods.go:61] "coredns-5dd5756b68-nl8xc" [17cd3c37-30e8-4d98-81f5-44f58135adf3] Running
	I1218 11:53:46.083226  706399 system_pods.go:61] "etcd-multinode-107476" [57bcfe21-f4da-4bcf-bb4e-385b695e1e0f] Running
	I1218 11:53:46.083231  706399 system_pods.go:61] "kindnet-6wlkb" [1cf338b4-8a33-4e69-aa83-3cd29b041e08] Running
	I1218 11:53:46.083237  706399 system_pods.go:61] "kindnet-8hrhv" [ef739466-48d4-4fbd-8fa5-63a41e4c6833] Running
	I1218 11:53:46.083242  706399 system_pods.go:61] "kindnet-l9h8d" [0acf0fd4-5988-4545-828c-7cb6076a5b18] Running
	I1218 11:53:46.083248  706399 system_pods.go:61] "kube-apiserver-multinode-107476" [ed1a5fb5-539a-4a7d-9977-42e1392858fb] Running
	I1218 11:53:46.083263  706399 system_pods.go:61] "kube-controller-manager-multinode-107476" [9b1fc3f6-07ef-4577-9135-a1c4844e5555] Running
	I1218 11:53:46.083274  706399 system_pods.go:61] "kube-proxy-9xwh7" [d1b02596-ab29-4f7a-8118-bd091eef9e44] Running
	I1218 11:53:46.083283  706399 system_pods.go:61] "kube-proxy-ff4bs" [a5e9af15-7c15-4de8-8be0-1b8e7289125f] Running
	I1218 11:53:46.083290  706399 system_pods.go:61] "kube-proxy-jf8kx" [060b1020-573b-4b35-9a0b-e04f37535267] Running
	I1218 11:53:46.083299  706399 system_pods.go:61] "kube-scheduler-multinode-107476" [08f65d94-d942-4ae5-a937-e3efff4b51dd] Running
	I1218 11:53:46.083306  706399 system_pods.go:61] "storage-provisioner" [e04ec19d-39a8-4849-b604-8e46b7f9cea3] Running
	I1218 11:53:46.083317  706399 system_pods.go:74] duration metric: took 182.043479ms to wait for pod list to return data ...
	I1218 11:53:46.083328  706399 default_sa.go:34] waiting for default service account to be created ...
	I1218 11:53:46.273839  706399 request.go:629] Waited for 190.41018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/default/serviceaccounts
	I1218 11:53:46.273914  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/default/serviceaccounts
	I1218 11:53:46.273919  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:46.273928  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:46.273934  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:46.277176  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:46.277201  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:46.277209  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:46.277219  706399 round_trippers.go:580]     Content-Length: 261
	I1218 11:53:46.277227  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:46 GMT
	I1218 11:53:46.277236  706399 round_trippers.go:580]     Audit-Id: 8fb527bf-40a9-449e-b359-393d44708047
	I1218 11:53:46.277245  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:46.277251  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:46.277260  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:46.277289  706399 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"891"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"d939767d-22df-4871-b1e9-1f264cd78bb5","resourceVersion":"351","creationTimestamp":"2023-12-18T11:49:29Z"}}]}
	I1218 11:53:46.277563  706399 default_sa.go:45] found service account: "default"
	I1218 11:53:46.277611  706399 default_sa.go:55] duration metric: took 194.253503ms for default service account to be created ...
	I1218 11:53:46.277627  706399 system_pods.go:116] waiting for k8s-apps to be running ...
	I1218 11:53:46.474114  706399 request.go:629] Waited for 196.394547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:46.474195  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/namespaces/kube-system/pods
	I1218 11:53:46.474203  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:46.474215  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:46.474228  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:46.478438  706399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1218 11:53:46.478468  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:46.478479  706399 round_trippers.go:580]     Audit-Id: bb45fe89-dded-417e-8392-f9b3d76b81f5
	I1218 11:53:46.478488  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:46.478496  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:46.478505  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:46.478512  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:46.478528  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:46 GMT
	I1218 11:53:46.479114  706399 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"coredns-5dd5756b68-nl8xc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"17cd3c37-30e8-4d98-81f5-44f58135adf3","resourceVersion":"887","creationTimestamp":"2023-12-18T11:49:29Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"dafac34d-5bbe-41fe-9430-4bc1ce19de08","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-12-18T11:49:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dafac34d-5bbe-41fe-9430-4bc1ce19de08\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82968 chars]
	I1218 11:53:46.481559  706399 system_pods.go:86] 12 kube-system pods found
	I1218 11:53:46.481584  706399 system_pods.go:89] "coredns-5dd5756b68-nl8xc" [17cd3c37-30e8-4d98-81f5-44f58135adf3] Running
	I1218 11:53:46.481592  706399 system_pods.go:89] "etcd-multinode-107476" [57bcfe21-f4da-4bcf-bb4e-385b695e1e0f] Running
	I1218 11:53:46.481599  706399 system_pods.go:89] "kindnet-6wlkb" [1cf338b4-8a33-4e69-aa83-3cd29b041e08] Running
	I1218 11:53:46.481605  706399 system_pods.go:89] "kindnet-8hrhv" [ef739466-48d4-4fbd-8fa5-63a41e4c6833] Running
	I1218 11:53:46.481610  706399 system_pods.go:89] "kindnet-l9h8d" [0acf0fd4-5988-4545-828c-7cb6076a5b18] Running
	I1218 11:53:46.481619  706399 system_pods.go:89] "kube-apiserver-multinode-107476" [ed1a5fb5-539a-4a7d-9977-42e1392858fb] Running
	I1218 11:53:46.481627  706399 system_pods.go:89] "kube-controller-manager-multinode-107476" [9b1fc3f6-07ef-4577-9135-a1c4844e5555] Running
	I1218 11:53:46.481634  706399 system_pods.go:89] "kube-proxy-9xwh7" [d1b02596-ab29-4f7a-8118-bd091eef9e44] Running
	I1218 11:53:46.481643  706399 system_pods.go:89] "kube-proxy-ff4bs" [a5e9af15-7c15-4de8-8be0-1b8e7289125f] Running
	I1218 11:53:46.481651  706399 system_pods.go:89] "kube-proxy-jf8kx" [060b1020-573b-4b35-9a0b-e04f37535267] Running
	I1218 11:53:46.481658  706399 system_pods.go:89] "kube-scheduler-multinode-107476" [08f65d94-d942-4ae5-a937-e3efff4b51dd] Running
	I1218 11:53:46.481667  706399 system_pods.go:89] "storage-provisioner" [e04ec19d-39a8-4849-b604-8e46b7f9cea3] Running
	I1218 11:53:46.481677  706399 system_pods.go:126] duration metric: took 204.042426ms to wait for k8s-apps to be running ...
	I1218 11:53:46.481690  706399 system_svc.go:44] waiting for kubelet service to be running ....
	I1218 11:53:46.481747  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 11:53:46.496708  706399 system_svc.go:56] duration metric: took 15.008248ms WaitForService to wait for kubelet.
	I1218 11:53:46.496742  706399 kubeadm.go:581] duration metric: took 15.510812865s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1218 11:53:46.496766  706399 node_conditions.go:102] verifying NodePressure condition ...
	I1218 11:53:46.674277  706399 request.go:629] Waited for 177.41815ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.124:8443/api/v1/nodes
	I1218 11:53:46.674357  706399 round_trippers.go:463] GET https://192.168.39.124:8443/api/v1/nodes
	I1218 11:53:46.674362  706399 round_trippers.go:469] Request Headers:
	I1218 11:53:46.674418  706399 round_trippers.go:473]     Accept: application/json, */*
	I1218 11:53:46.674489  706399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1218 11:53:46.677744  706399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1218 11:53:46.677763  706399 round_trippers.go:577] Response Headers:
	I1218 11:53:46.677771  706399 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf954231-c37b-4a92-960a-a15d47fba7fd
	I1218 11:53:46.677777  706399 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b3372d83-c53e-4e30-8c41-c7ed4505e58e
	I1218 11:53:46.677783  706399 round_trippers.go:580]     Date: Mon, 18 Dec 2023 11:53:46 GMT
	I1218 11:53:46.677788  706399 round_trippers.go:580]     Audit-Id: 127b003d-0ea0-41a7-833f-6b9650904cf1
	I1218 11:53:46.677794  706399 round_trippers.go:580]     Cache-Control: no-cache, private
	I1218 11:53:46.677803  706399 round_trippers.go:580]     Content-Type: application/json
	I1218 11:53:46.678201  706399 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"893"},"items":[{"metadata":{"name":"multinode-107476","uid":"bab8cfff-8ac7-407a-aa43-2f7afae6a2b7","resourceVersion":"852","creationTimestamp":"2023-12-18T11:49:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-107476","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30d8ecd1811578f7b9db580c501c654c189f68d4","minikube.k8s.io/name":"multinode-107476","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_12_18T11_49_17_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 14648 chars]
	I1218 11:53:46.678828  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:46.678850  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:46.678863  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:46.678867  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:46.678872  706399 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1218 11:53:46.678875  706399 node_conditions.go:123] node cpu capacity is 2
	I1218 11:53:46.678879  706399 node_conditions.go:105] duration metric: took 182.108972ms to run NodePressure ...
	I1218 11:53:46.678892  706399 start.go:228] waiting for startup goroutines ...
	I1218 11:53:46.678901  706399 start.go:233] waiting for cluster config update ...
	I1218 11:53:46.678914  706399 start.go:242] writing updated cluster config ...
	I1218 11:53:46.679419  706399 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:53:46.679525  706399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/config.json ...
	I1218 11:53:46.683229  706399 out.go:177] * Starting worker node multinode-107476-m02 in cluster multinode-107476
	I1218 11:53:46.684696  706399 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 11:53:46.684730  706399 cache.go:56] Caching tarball of preloaded images
	I1218 11:53:46.684832  706399 preload.go:174] Found /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1218 11:53:46.684846  706399 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1218 11:53:46.684979  706399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/config.json ...
	I1218 11:53:46.685210  706399 start.go:365] acquiring machines lock for multinode-107476-m02: {Name:mkb0cc9fb73bf09f8db2889f035117cd52674d46 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1218 11:53:46.685261  706399 start.go:369] acquired machines lock for "multinode-107476-m02" in 28.185µs
	I1218 11:53:46.685282  706399 start.go:96] Skipping create...Using existing machine configuration
	I1218 11:53:46.685293  706399 fix.go:54] fixHost starting: m02
	I1218 11:53:46.685600  706399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:53:46.685626  706399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:53:46.700004  706399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44879
	I1218 11:53:46.700443  706399 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:53:46.700912  706399 main.go:141] libmachine: Using API Version  1
	I1218 11:53:46.700933  706399 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:53:46.701277  706399 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:53:46.701452  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:53:46.701622  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetState
	I1218 11:53:46.703098  706399 fix.go:102] recreateIfNeeded on multinode-107476-m02: state=Stopped err=<nil>
	I1218 11:53:46.703120  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	W1218 11:53:46.703304  706399 fix.go:128] unexpected machine state, will restart: <nil>
	I1218 11:53:46.705286  706399 out.go:177] * Restarting existing kvm2 VM for "multinode-107476-m02" ...
	I1218 11:53:46.706596  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .Start
	I1218 11:53:46.706784  706399 main.go:141] libmachine: (multinode-107476-m02) Ensuring networks are active...
	I1218 11:53:46.707411  706399 main.go:141] libmachine: (multinode-107476-m02) Ensuring network default is active
	I1218 11:53:46.707790  706399 main.go:141] libmachine: (multinode-107476-m02) Ensuring network mk-multinode-107476 is active
	I1218 11:53:46.708193  706399 main.go:141] libmachine: (multinode-107476-m02) Getting domain xml...
	I1218 11:53:46.708862  706399 main.go:141] libmachine: (multinode-107476-m02) Creating domain...
	I1218 11:53:47.936995  706399 main.go:141] libmachine: (multinode-107476-m02) Waiting to get IP...
	I1218 11:53:47.937889  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:47.938288  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:47.938375  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:47.938256  706643 retry.go:31] will retry after 227.139333ms: waiting for machine to come up
	I1218 11:53:48.166820  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:48.167284  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:48.167314  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:48.167220  706643 retry.go:31] will retry after 375.610064ms: waiting for machine to come up
	I1218 11:53:48.544738  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:48.545081  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:48.545107  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:48.545047  706643 retry.go:31] will retry after 378.162219ms: waiting for machine to come up
	I1218 11:53:48.924609  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:48.925035  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:48.925066  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:48.924973  706643 retry.go:31] will retry after 372.216471ms: waiting for machine to come up
	I1218 11:53:49.298428  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:49.298906  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:49.298931  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:49.298873  706643 retry.go:31] will retry after 655.95423ms: waiting for machine to come up
	I1218 11:53:49.956567  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:49.957078  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:49.957106  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:49.957030  706643 retry.go:31] will retry after 860.476893ms: waiting for machine to come up
	I1218 11:53:50.819121  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:50.819479  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:50.819506  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:50.819449  706643 retry.go:31] will retry after 763.336427ms: waiting for machine to come up
	I1218 11:53:51.585019  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:51.585507  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:51.585542  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:51.585441  706643 retry.go:31] will retry after 963.292989ms: waiting for machine to come up
	I1218 11:53:52.550108  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:52.550472  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:52.550529  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:52.550417  706643 retry.go:31] will retry after 1.166437684s: waiting for machine to come up
	I1218 11:53:53.718762  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:53.719219  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:53.719252  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:53.719160  706643 retry.go:31] will retry after 2.253762045s: waiting for machine to come up
	I1218 11:53:55.974428  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:55.974863  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:55.974891  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:55.974822  706643 retry.go:31] will retry after 2.547747733s: waiting for machine to come up
	I1218 11:53:58.523817  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:53:58.524293  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:53:58.524342  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:53:58.524169  706643 retry.go:31] will retry after 2.214783254s: waiting for machine to come up
	I1218 11:54:00.740859  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:00.741279  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | unable to find current IP address of domain multinode-107476-m02 in network mk-multinode-107476
	I1218 11:54:00.741308  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | I1218 11:54:00.741245  706643 retry.go:31] will retry after 4.522253429s: waiting for machine to come up
	I1218 11:54:05.267134  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.267545  706399 main.go:141] libmachine: (multinode-107476-m02) Found IP for machine: 192.168.39.238
	I1218 11:54:05.267562  706399 main.go:141] libmachine: (multinode-107476-m02) Reserving static IP address...
	I1218 11:54:05.267572  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has current primary IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.268162  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "multinode-107476-m02", mac: "52:54:00:66:62:9b", ip: "192.168.39.238"} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.268198  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | skip adding static IP to network mk-multinode-107476 - found existing host DHCP lease matching {name: "multinode-107476-m02", mac: "52:54:00:66:62:9b", ip: "192.168.39.238"}
	I1218 11:54:05.268217  706399 main.go:141] libmachine: (multinode-107476-m02) Reserved static IP address: 192.168.39.238
	I1218 11:54:05.268237  706399 main.go:141] libmachine: (multinode-107476-m02) Waiting for SSH to be available...
	I1218 11:54:05.268253  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | Getting to WaitForSSH function...
	I1218 11:54:05.270329  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.270682  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.270713  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.270879  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | Using SSH client type: external
	I1218 11:54:05.270921  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa (-rw-------)
	I1218 11:54:05.270945  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1218 11:54:05.270955  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | About to run SSH command:
	I1218 11:54:05.270967  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | exit 0
	I1218 11:54:05.359260  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | SSH cmd err, output: <nil>: 
	I1218 11:54:05.359669  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetConfigRaw
	I1218 11:54:05.360312  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetIP
	I1218 11:54:05.362713  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.363152  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.363183  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.363469  706399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/multinode-107476/config.json ...
	I1218 11:54:05.363688  706399 machine.go:88] provisioning docker machine ...
	I1218 11:54:05.363708  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:05.363941  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetMachineName
	I1218 11:54:05.364144  706399 buildroot.go:166] provisioning hostname "multinode-107476-m02"
	I1218 11:54:05.364165  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetMachineName
	I1218 11:54:05.364403  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:05.366681  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.367078  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.367106  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.367207  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:05.367386  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.367524  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.367640  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:05.367789  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:05.368264  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:05.368292  706399 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-107476-m02 && echo "multinode-107476-m02" | sudo tee /etc/hostname
	I1218 11:54:05.497634  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-107476-m02
	
	I1218 11:54:05.497668  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:05.500537  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.500970  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.501003  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.501203  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:05.501432  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.501618  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.501779  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:05.501985  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:05.502309  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:05.502328  706399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-107476-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-107476-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-107476-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1218 11:54:05.623703  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1218 11:54:05.623739  706399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17824-683489/.minikube CaCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17824-683489/.minikube}
	I1218 11:54:05.623762  706399 buildroot.go:174] setting up certificates
	I1218 11:54:05.623773  706399 provision.go:83] configureAuth start
	I1218 11:54:05.623782  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetMachineName
	I1218 11:54:05.624072  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetIP
	I1218 11:54:05.626748  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.627115  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.627143  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.627342  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:05.629559  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.629885  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.629931  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.630011  706399 provision.go:138] copyHostCerts
	I1218 11:54:05.630042  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem
	I1218 11:54:05.630074  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem, removing ...
	I1218 11:54:05.630086  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem
	I1218 11:54:05.630147  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/ca.pem (1082 bytes)
	I1218 11:54:05.630219  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem
	I1218 11:54:05.630242  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem, removing ...
	I1218 11:54:05.630249  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem
	I1218 11:54:05.630271  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/cert.pem (1123 bytes)
	I1218 11:54:05.630313  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem
	I1218 11:54:05.630328  706399 exec_runner.go:144] found /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem, removing ...
	I1218 11:54:05.630334  706399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem
	I1218 11:54:05.630353  706399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17824-683489/.minikube/key.pem (1679 bytes)
	I1218 11:54:05.630395  706399 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca-key.pem org=jenkins.multinode-107476-m02 san=[192.168.39.238 192.168.39.238 localhost 127.0.0.1 minikube multinode-107476-m02]
	I1218 11:54:05.741217  706399 provision.go:172] copyRemoteCerts
	I1218 11:54:05.741280  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1218 11:54:05.741305  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:05.744095  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.744415  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.744451  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.744641  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:05.744867  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.745081  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:05.745239  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa Username:docker}
	I1218 11:54:05.832540  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1218 11:54:05.832629  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1218 11:54:05.857130  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1218 11:54:05.857201  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1218 11:54:05.880270  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1218 11:54:05.880339  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1218 11:54:05.904290  706399 provision.go:86] duration metric: configureAuth took 280.501532ms
	I1218 11:54:05.904323  706399 buildroot.go:189] setting minikube options for container-runtime
	I1218 11:54:05.904615  706399 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:54:05.904650  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:05.904939  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:05.907613  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.908019  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:05.908060  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:05.908259  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:05.908465  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.908634  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:05.908797  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:05.908991  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:05.909320  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:05.909336  706399 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1218 11:54:06.025905  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1218 11:54:06.025936  706399 buildroot.go:70] root file system type: tmpfs
	I1218 11:54:06.026101  706399 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1218 11:54:06.026127  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:06.029047  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:06.029390  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:06.029429  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:06.029644  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:06.029864  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:06.030054  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:06.030178  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:06.030331  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:06.030646  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:06.030705  706399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.124"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1218 11:54:06.156093  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.124
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1218 11:54:06.156134  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:06.159082  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:06.159496  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:06.159528  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:06.159684  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:06.159913  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:06.160156  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:06.160304  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:06.160478  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:06.160807  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:06.160825  706399 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1218 11:54:07.046577  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1218 11:54:07.046609  706399 machine.go:91] provisioned docker machine in 1.68290659s
	I1218 11:54:07.046627  706399 start.go:300] post-start starting for "multinode-107476-m02" (driver="kvm2")
	I1218 11:54:07.046641  706399 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1218 11:54:07.046672  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:07.047004  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1218 11:54:07.047085  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:07.049936  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.050337  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:07.050373  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.050532  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:07.050720  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:07.050893  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:07.051075  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa Username:docker}
	I1218 11:54:07.137937  706399 ssh_runner.go:195] Run: cat /etc/os-release
	I1218 11:54:07.141965  706399 command_runner.go:130] > NAME=Buildroot
	I1218 11:54:07.141990  706399 command_runner.go:130] > VERSION=2021.02.12-1-g0492d51-dirty
	I1218 11:54:07.141996  706399 command_runner.go:130] > ID=buildroot
	I1218 11:54:07.142004  706399 command_runner.go:130] > VERSION_ID=2021.02.12
	I1218 11:54:07.142016  706399 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1218 11:54:07.142062  706399 info.go:137] Remote host: Buildroot 2021.02.12
	I1218 11:54:07.142079  706399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17824-683489/.minikube/addons for local assets ...
	I1218 11:54:07.142150  706399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17824-683489/.minikube/files for local assets ...
	I1218 11:54:07.142249  706399 filesync.go:149] local asset: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem -> 6907392.pem in /etc/ssl/certs
	I1218 11:54:07.142262  706399 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem -> /etc/ssl/certs/6907392.pem
	I1218 11:54:07.142338  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1218 11:54:07.150461  706399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/ssl/certs/6907392.pem --> /etc/ssl/certs/6907392.pem (1708 bytes)
	I1218 11:54:07.173512  706399 start.go:303] post-start completed in 126.867172ms
	I1218 11:54:07.173544  706399 fix.go:56] fixHost completed within 20.488252806s
	I1218 11:54:07.173567  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:07.176291  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.176751  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:07.176783  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.176950  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:07.177185  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:07.177343  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:07.177560  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:07.177727  706399 main.go:141] libmachine: Using SSH client type: native
	I1218 11:54:07.178069  706399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808ea0] 0x80bb80 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1218 11:54:07.178084  706399 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1218 11:54:07.292631  706399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1702900447.242005495
	
	I1218 11:54:07.292655  706399 fix.go:206] guest clock: 1702900447.242005495
	I1218 11:54:07.292662  706399 fix.go:219] Guest: 2023-12-18 11:54:07.242005495 +0000 UTC Remote: 2023-12-18 11:54:07.173548129 +0000 UTC m=+83.636906782 (delta=68.457366ms)
	I1218 11:54:07.292718  706399 fix.go:190] guest clock delta is within tolerance: 68.457366ms
	I1218 11:54:07.292725  706399 start.go:83] releasing machines lock for "multinode-107476-m02", held for 20.607451202s
	I1218 11:54:07.292751  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:07.293062  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetIP
	I1218 11:54:07.295732  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.296145  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:07.296179  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.298392  706399 out.go:177] * Found network options:
	I1218 11:54:07.299731  706399 out.go:177]   - NO_PROXY=192.168.39.124
	W1218 11:54:07.301071  706399 proxy.go:119] fail to check proxy env: Error ip not in block
	I1218 11:54:07.301110  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:07.301626  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:07.301817  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:54:07.301902  706399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1218 11:54:07.301942  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	W1218 11:54:07.302000  706399 proxy.go:119] fail to check proxy env: Error ip not in block
	I1218 11:54:07.302076  706399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1218 11:54:07.302097  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:54:07.304593  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.304845  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.304987  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:07.305018  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.305124  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:07.305254  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:54:07.305278  706399 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:54:07.305303  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:07.305455  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:54:07.305523  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:07.305617  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:54:07.305681  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa Username:docker}
	I1218 11:54:07.305742  706399 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:54:07.305842  706399 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa Username:docker}
	I1218 11:54:07.391351  706399 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1218 11:54:07.412687  706399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1218 11:54:07.412710  706399 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1218 11:54:07.412781  706399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1218 11:54:07.429410  706399 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1218 11:54:07.429693  706399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1218 11:54:07.429717  706399 start.go:475] detecting cgroup driver to use...
	I1218 11:54:07.429853  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 11:54:07.445443  706399 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1218 11:54:07.445529  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1218 11:54:07.455706  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1218 11:54:07.465480  706399 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1218 11:54:07.465531  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1218 11:54:07.475348  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 11:54:07.485332  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1218 11:54:07.495743  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1218 11:54:07.505751  706399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1218 11:54:07.515919  706399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1218 11:54:07.525808  706399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1218 11:54:07.534674  706399 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1218 11:54:07.534812  706399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1218 11:54:07.544293  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:54:07.647636  706399 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1218 11:54:07.664455  706399 start.go:475] detecting cgroup driver to use...
	I1218 11:54:07.664544  706399 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1218 11:54:07.678392  706399 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1218 11:54:07.678419  706399 command_runner.go:130] > [Unit]
	I1218 11:54:07.678429  706399 command_runner.go:130] > Description=Docker Application Container Engine
	I1218 11:54:07.678438  706399 command_runner.go:130] > Documentation=https://docs.docker.com
	I1218 11:54:07.678446  706399 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1218 11:54:07.678454  706399 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1218 11:54:07.678468  706399 command_runner.go:130] > StartLimitBurst=3
	I1218 11:54:07.678475  706399 command_runner.go:130] > StartLimitIntervalSec=60
	I1218 11:54:07.678482  706399 command_runner.go:130] > [Service]
	I1218 11:54:07.678489  706399 command_runner.go:130] > Type=notify
	I1218 11:54:07.678499  706399 command_runner.go:130] > Restart=on-failure
	I1218 11:54:07.678506  706399 command_runner.go:130] > Environment=NO_PROXY=192.168.39.124
	I1218 11:54:07.678522  706399 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1218 11:54:07.678539  706399 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1218 11:54:07.678552  706399 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1218 11:54:07.678569  706399 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1218 11:54:07.678579  706399 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1218 11:54:07.678623  706399 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1218 11:54:07.678642  706399 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1218 11:54:07.678658  706399 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1218 11:54:07.678672  706399 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1218 11:54:07.678681  706399 command_runner.go:130] > ExecStart=
	I1218 11:54:07.678704  706399 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1218 11:54:07.678716  706399 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1218 11:54:07.678732  706399 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1218 11:54:07.678739  706399 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1218 11:54:07.678746  706399 command_runner.go:130] > LimitNOFILE=infinity
	I1218 11:54:07.678750  706399 command_runner.go:130] > LimitNPROC=infinity
	I1218 11:54:07.678754  706399 command_runner.go:130] > LimitCORE=infinity
	I1218 11:54:07.678759  706399 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1218 11:54:07.678767  706399 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1218 11:54:07.678773  706399 command_runner.go:130] > TasksMax=infinity
	I1218 11:54:07.678779  706399 command_runner.go:130] > TimeoutStartSec=0
	I1218 11:54:07.678786  706399 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1218 11:54:07.678790  706399 command_runner.go:130] > Delegate=yes
	I1218 11:54:07.678797  706399 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1218 11:54:07.678805  706399 command_runner.go:130] > KillMode=process
	I1218 11:54:07.678811  706399 command_runner.go:130] > [Install]
	I1218 11:54:07.678817  706399 command_runner.go:130] > WantedBy=multi-user.target
	I1218 11:54:07.678881  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 11:54:07.699422  706399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1218 11:54:07.717253  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1218 11:54:07.729421  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 11:54:07.740150  706399 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1218 11:54:07.771472  706399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1218 11:54:07.783922  706399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1218 11:54:07.801472  706399 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1218 11:54:07.801565  706399 ssh_runner.go:195] Run: which cri-dockerd
	I1218 11:54:07.805378  706399 command_runner.go:130] > /usr/bin/cri-dockerd
	I1218 11:54:07.805607  706399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1218 11:54:07.814619  706399 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1218 11:54:07.830501  706399 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1218 11:54:07.940117  706399 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1218 11:54:08.043122  706399 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1218 11:54:08.043192  706399 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1218 11:54:08.059638  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:54:08.160537  706399 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1218 11:54:09.625721  706399 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4651404s)
	I1218 11:54:09.625800  706399 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 11:54:09.727037  706399 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1218 11:54:09.837890  706399 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1218 11:54:09.952084  706399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1218 11:54:10.068114  706399 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1218 11:54:10.082662  706399 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I1218 11:54:10.083512  706399 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1218 11:54:10.094378  706399 command_runner.go:130] > -- Journal begins at Mon 2023-12-18 11:53:58 UTC, ends at Mon 2023-12-18 11:54:10 UTC. --
	I1218 11:54:10.094403  706399 command_runner.go:130] > Dec 18 11:53:59 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1218 11:54:10.094413  706399 command_runner.go:130] > Dec 18 11:53:59 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1218 11:54:10.094426  706399 command_runner.go:130] > Dec 18 11:54:01 minikube systemd[1]: cri-docker.socket: Succeeded.
	I1218 11:54:10.094437  706399 command_runner.go:130] > Dec 18 11:54:01 minikube systemd[1]: Closed CRI Docker Socket for the API.
	I1218 11:54:10.094447  706399 command_runner.go:130] > Dec 18 11:54:01 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	I1218 11:54:10.094463  706399 command_runner.go:130] > Dec 18 11:54:01 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1218 11:54:10.094476  706399 command_runner.go:130] > Dec 18 11:54:01 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1218 11:54:10.094488  706399 command_runner.go:130] > Dec 18 11:54:04 minikube systemd[1]: cri-docker.socket: Succeeded.
	I1218 11:54:10.094501  706399 command_runner.go:130] > Dec 18 11:54:04 minikube systemd[1]: Closed CRI Docker Socket for the API.
	I1218 11:54:10.094509  706399 command_runner.go:130] > Dec 18 11:54:04 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	I1218 11:54:10.094518  706399 command_runner.go:130] > Dec 18 11:54:04 minikube systemd[1]: Starting CRI Docker Socket for the API.
	I1218 11:54:10.094526  706399 command_runner.go:130] > Dec 18 11:54:04 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	I1218 11:54:10.094544  706399 command_runner.go:130] > Dec 18 11:54:06 multinode-107476-m02 systemd[1]: cri-docker.socket: Succeeded.
	I1218 11:54:10.094553  706399 command_runner.go:130] > Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Closed CRI Docker Socket for the API.
	I1218 11:54:10.094561  706399 command_runner.go:130] > Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	I1218 11:54:10.094570  706399 command_runner.go:130] > Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Starting CRI Docker Socket for the API.
	I1218 11:54:10.094579  706399 command_runner.go:130] > Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Listening on CRI Docker Socket for the API.
	I1218 11:54:10.094587  706399 command_runner.go:130] > Dec 18 11:54:10 multinode-107476-m02 systemd[1]: cri-docker.socket: Succeeded.
	I1218 11:54:10.094596  706399 command_runner.go:130] > Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Closed CRI Docker Socket for the API.
	I1218 11:54:10.094607  706399 command_runner.go:130] > Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	I1218 11:54:10.094618  706399 command_runner.go:130] > Dec 18 11:54:10 multinode-107476-m02 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	I1218 11:54:10.094628  706399 command_runner.go:130] > Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	I1218 11:54:10.097238  706399 out.go:177] 
	W1218 11:54:10.099022  706399 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Mon 2023-12-18 11:53:58 UTC, ends at Mon 2023-12-18 11:54:10 UTC. --
	Dec 18 11:53:59 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 18 11:53:59 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 18 11:54:01 minikube systemd[1]: cri-docker.socket: Succeeded.
	Dec 18 11:54:01 minikube systemd[1]: Closed CRI Docker Socket for the API.
	Dec 18 11:54:01 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 18 11:54:01 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 18 11:54:01 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 18 11:54:04 minikube systemd[1]: cri-docker.socket: Succeeded.
	Dec 18 11:54:04 minikube systemd[1]: Closed CRI Docker Socket for the API.
	Dec 18 11:54:04 minikube systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 18 11:54:04 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 18 11:54:04 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 18 11:54:06 multinode-107476-m02 systemd[1]: cri-docker.socket: Succeeded.
	Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 18 11:54:06 multinode-107476-m02 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 18 11:54:10 multinode-107476-m02 systemd[1]: cri-docker.socket: Succeeded.
	Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 18 11:54:10 multinode-107476-m02 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 18 11:54:10 multinode-107476-m02 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1218 11:54:10.099052  706399 out.go:239] * 
	W1218 11:54:10.099923  706399 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1218 11:54:10.101451  706399 out.go:177] 
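	The journal lines above pinpoint the root cause: systemd refuses to restart a `.socket` unit while its paired `.service` unit is still active ("Socket service cri-docker.service already active, refusing."), so `sudo systemctl restart cri-docker.socket` exits non-zero and minikube aborts with RUNTIME_ENABLE. A hypothetical manual-recovery sketch (not part of the test run; the profile/node names are taken from this log, and the fix assumes the refusal is the only problem) is to stop the service before cycling the socket:

	```shell
	# Sketch only: stop the paired service first so systemd will accept
	# restarting the socket unit, then bring the service back up.
	# Assumes the node is still reachable over SSH.
	minikube ssh -p multinode-107476 -n m02 -- \
	  'sudo systemctl stop cri-docker.service && \
	   sudo systemctl restart cri-docker.socket && \
	   sudo systemctl start cri-docker.service'
	```

	Whether minikube itself should stop cri-docker.service before restarting the socket is a separate question for the fix in the start path.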
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-12-18 11:52:55 UTC, ends at Mon 2023-12-18 11:54:14 UTC. --
	Dec 18 11:53:42 multinode-107476 dockerd[827]: time="2023-12-18T11:53:42.525398559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:42 multinode-107476 dockerd[827]: time="2023-12-18T11:53:42.525520470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 11:53:42 multinode-107476 dockerd[827]: time="2023-12-18T11:53:42.525548881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:42 multinode-107476 dockerd[827]: time="2023-12-18T11:53:42.544452319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 11:53:42 multinode-107476 dockerd[827]: time="2023-12-18T11:53:42.544571854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:42 multinode-107476 dockerd[827]: time="2023-12-18T11:53:42.544594058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 11:53:42 multinode-107476 dockerd[827]: time="2023-12-18T11:53:42.544605950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:43 multinode-107476 cri-dockerd[1042]: time="2023-12-18T11:53:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4b509c1e475b06e9c062d47412c861219a821775adca61d3b54f342424644394/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 18 11:53:43 multinode-107476 cri-dockerd[1042]: time="2023-12-18T11:53:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d6381f412cd287ea043ed1bc7bbea0281bf97248c6d11131123e855abb1ac8d9/resolv.conf as [nameserver 192.168.122.1]"
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.270951082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.271351233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.271543854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.271763380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.289383236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.292535007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.294037889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 11:53:43 multinode-107476 dockerd[827]: time="2023-12-18T11:53:43.294399174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:53:58 multinode-107476 dockerd[821]: time="2023-12-18T11:53:58.368249488Z" level=info msg="ignoring event" container=123ceedfce1ccd5f27ac8b7368fca1d6cacecf05d48983a4f7aa454d139d8b08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 18 11:53:58 multinode-107476 dockerd[827]: time="2023-12-18T11:53:58.368921160Z" level=info msg="shim disconnected" id=123ceedfce1ccd5f27ac8b7368fca1d6cacecf05d48983a4f7aa454d139d8b08 namespace=moby
	Dec 18 11:53:58 multinode-107476 dockerd[827]: time="2023-12-18T11:53:58.369048203Z" level=warning msg="cleaning up after shim disconnected" id=123ceedfce1ccd5f27ac8b7368fca1d6cacecf05d48983a4f7aa454d139d8b08 namespace=moby
	Dec 18 11:53:58 multinode-107476 dockerd[827]: time="2023-12-18T11:53:58.369060508Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 18 11:54:12 multinode-107476 dockerd[827]: time="2023-12-18T11:54:12.705534696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 18 11:54:12 multinode-107476 dockerd[827]: time="2023-12-18T11:54:12.706083652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 18 11:54:12 multinode-107476 dockerd[827]: time="2023-12-18T11:54:12.706298873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 18 11:54:12 multinode-107476 dockerd[827]: time="2023-12-18T11:54:12.706499669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b6a1164f3726f       6e38f40d628db                                                                                         2 seconds ago       Running             storage-provisioner       2                   5a2ed62879795       storage-provisioner
	4194bb8a74edb       ead0a4a53df89                                                                                         31 seconds ago      Running             coredns                   1                   d6381f412cd28       coredns-5dd5756b68-nl8xc
	c2db9601c5995       8c811b4aec35f                                                                                         31 seconds ago      Running             busybox                   1                   4b509c1e475b0       busybox-5bc68d56bd-sjq8b
	8f8819408c224       c7d1297425461                                                                                         44 seconds ago      Running             kindnet-cni               1                   8a3f2a24cd178       kindnet-6wlkb
	123ceedfce1cc       6e38f40d628db                                                                                         47 seconds ago      Exited              storage-provisioner       1                   5a2ed62879795       storage-provisioner
	f7a1971535c43       83f6cc407eed8                                                                                         47 seconds ago      Running             kube-proxy                1                   6999f04e162af       kube-proxy-jf8kx
	cdc0b5d46762e       73deb9a3f7025                                                                                         52 seconds ago      Running             etcd                      1                   3a312846e9f6f       etcd-multinode-107476
	b53866e4bc682       e3db313c6dbc0                                                                                         52 seconds ago      Running             kube-scheduler            1                   929d541b45df5       kube-scheduler-multinode-107476
	08bca6e395b93       7fe0e6f37db33                                                                                         52 seconds ago      Running             kube-apiserver            1                   41771edbf29b9       kube-apiserver-multinode-107476
	eb37efd287f8f       d058aa5ab969c                                                                                         53 seconds ago      Running             kube-controller-manager   1                   afb921712c653       kube-controller-manager-multinode-107476
	cb290feaafc5e       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   3 minutes ago       Exited              busybox                   0                   3842d71341658       busybox-5bc68d56bd-sjq8b
	8a9a67bb77c43       ead0a4a53df89                                                                                         4 minutes ago       Exited              coredns                   0                   a5499078bf2ca       coredns-5dd5756b68-nl8xc
	f6e3111557b6b       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              4 minutes ago       Exited              kindnet-cni               0                   ecad224e7387c       kindnet-6wlkb
	9bd0f65050dcc       83f6cc407eed8                                                                                         4 minutes ago       Exited              kube-proxy                0                   ca78bca379ebe       kube-proxy-jf8kx
	367a10c5d07b5       e3db313c6dbc0                                                                                         5 minutes ago       Exited              kube-scheduler            0                   d06f419d4917c       kube-scheduler-multinode-107476
	fcaaf17b1eded       73deb9a3f7025                                                                                         5 minutes ago       Exited              etcd                      0                   7539f69199926       etcd-multinode-107476
	9226aa8cd1e99       7fe0e6f37db33                                                                                         5 minutes ago       Exited              kube-apiserver            0                   51c0e2b565115       kube-apiserver-multinode-107476
	4b66d146a3f47       d058aa5ab969c                                                                                         5 minutes ago       Exited              kube-controller-manager   0                   49adada57ae16       kube-controller-manager-multinode-107476
	
	* 
	* ==> coredns [4194bb8a74ed] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48516 - 21045 "HINFO IN 6898711184610774818.5232844636684493161. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020043569s
	
	* 
	* ==> coredns [8a9a67bb77c4] <==
	* [INFO] 10.244.0.3:39580 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001880469s
	[INFO] 10.244.0.3:45076 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136652s
	[INFO] 10.244.0.3:45753 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145352s
	[INFO] 10.244.0.3:38917 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001980062s
	[INFO] 10.244.0.3:52593 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071599s
	[INFO] 10.244.0.3:47945 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133323s
	[INFO] 10.244.0.3:51814 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069265s
	[INFO] 10.244.1.2:50202 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123562s
	[INFO] 10.244.1.2:45920 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149315s
	[INFO] 10.244.1.2:37077 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00033414s
	[INFO] 10.244.1.2:42462 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098089s
	[INFO] 10.244.0.3:34819 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102589s
	[INFO] 10.244.0.3:39334 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122985s
	[INFO] 10.244.0.3:36032 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000044929s
	[INFO] 10.244.0.3:49808 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066623s
	[INFO] 10.244.1.2:58102 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155245s
	[INFO] 10.244.1.2:52265 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000197453s
	[INFO] 10.244.1.2:51682 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000209848s
	[INFO] 10.244.1.2:46278 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000175008s
	[INFO] 10.244.0.3:46993 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110094s
	[INFO] 10.244.0.3:54791 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000086731s
	[INFO] 10.244.0.3:55681 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00008516s
	[INFO] 10.244.0.3:46353 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000042946s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-107476
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-107476
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30d8ecd1811578f7b9db580c501c654c189f68d4
	                    minikube.k8s.io/name=multinode-107476
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_18T11_49_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 11:49:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-107476
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 11:54:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 11:53:33 +0000   Mon, 18 Dec 2023 11:49:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 11:53:33 +0000   Mon, 18 Dec 2023 11:49:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 11:53:33 +0000   Mon, 18 Dec 2023 11:49:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 11:53:33 +0000   Mon, 18 Dec 2023 11:53:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.124
	  Hostname:    multinode-107476
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 676cd36f41bf41bfb2277224047042bb
	  System UUID:                676cd36f-41bf-41bf-b227-7224047042bb
	  Boot ID:                    b2d790b8-b563-4ca9-b85c-e8ef9f11b443
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-sjq8b                    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m39s
	  kube-system                 coredns-5dd5756b68-nl8xc                    100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     4m45s
	  kube-system                 etcd-multinode-107476                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         4m57s
	  kube-system                 kindnet-6wlkb                               100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      4m45s
	  kube-system                 kube-apiserver-multinode-107476             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         4m57s
	  kube-system                 kube-controller-manager-multinode-107476    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         4m57s
	  kube-system                 kube-proxy-jf8kx                            0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         4m45s
	  kube-system                 kube-scheduler-multinode-107476             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         4m57s
	  kube-system                 storage-provisioner                         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         4m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   100m (5%!)(MISSING)
	  memory             220Mi (10%!)(MISSING)  220Mi (10%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m43s                kube-proxy       
	  Normal  Starting                 46s                  kube-proxy       
	  Normal  Starting                 5m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m7s (x8 over 5m7s)  kubelet          Node multinode-107476 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s (x8 over 5m7s)  kubelet          Node multinode-107476 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s (x7 over 5m7s)  kubelet          Node multinode-107476 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     4m58s                kubelet          Node multinode-107476 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m58s                kubelet          Node multinode-107476 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s                kubelet          Node multinode-107476 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  4m58s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m58s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m46s                node-controller  Node multinode-107476 event: Registered Node multinode-107476 in Controller
	  Normal  NodeReady                4m35s                kubelet          Node multinode-107476 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node multinode-107476 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node multinode-107476 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x7 over 54s)    kubelet          Node multinode-107476 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           36s                  node-controller  Node multinode-107476 event: Registered Node multinode-107476 in Controller
	
	
	Name:               multinode-107476-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-107476-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30d8ecd1811578f7b9db580c501c654c189f68d4
	                    minikube.k8s.io/name=multinode-107476
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2023_12_18T11_52_06_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Dec 2023 11:50:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-107476-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Dec 2023 11:52:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Dec 2023 11:50:49 +0000   Mon, 18 Dec 2023 11:50:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Dec 2023 11:50:49 +0000   Mon, 18 Dec 2023 11:50:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Dec 2023 11:50:49 +0000   Mon, 18 Dec 2023 11:50:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Dec 2023 11:50:49 +0000   Mon, 18 Dec 2023 11:50:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    multinode-107476-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 835bc9952f0441a78a73352404b4fba8
	  System UUID:                835bc995-2f04-41a7-8a73-352404b4fba8
	  Boot ID:                    370f44f2-b022-4992-a457-7f0533c2bf00
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-8dg4d    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m40s
	  kube-system                 kindnet-l9h8d               100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      3m57s
	  kube-system                 kube-proxy-9xwh7            0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  RegisteredNode           3m57s                  node-controller  Node multinode-107476-m02 event: Registered Node multinode-107476-m02 in Controller
	  Normal  NodeHasSufficientMemory  3m57s (x5 over 3m59s)  kubelet          Node multinode-107476-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x5 over 3m59s)  kubelet          Node multinode-107476-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x5 over 3m59s)  kubelet          Node multinode-107476-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m43s                  kubelet          Node multinode-107476-m02 status is now: NodeReady
	  Normal  RegisteredNode           37s                    node-controller  Node multinode-107476-m02 event: Registered Node multinode-107476-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [Dec18 11:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067451] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.374087] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.402052] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152571] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.620566] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec18 11:53] systemd-fstab-generator[514]: Ignoring "noauto" for root device
	[  +0.100222] systemd-fstab-generator[525]: Ignoring "noauto" for root device
	[  +1.237028] systemd-fstab-generator[748]: Ignoring "noauto" for root device
	[  +0.284846] systemd-fstab-generator[787]: Ignoring "noauto" for root device
	[  +0.111464] systemd-fstab-generator[798]: Ignoring "noauto" for root device
	[  +0.119413] systemd-fstab-generator[811]: Ignoring "noauto" for root device
	[  +1.565875] systemd-fstab-generator[987]: Ignoring "noauto" for root device
	[  +0.112671] systemd-fstab-generator[998]: Ignoring "noauto" for root device
	[  +0.105237] systemd-fstab-generator[1009]: Ignoring "noauto" for root device
	[  +0.108653] systemd-fstab-generator[1020]: Ignoring "noauto" for root device
	[  +0.118694] systemd-fstab-generator[1034]: Ignoring "noauto" for root device
	[ +11.940809] systemd-fstab-generator[1284]: Ignoring "noauto" for root device
	[  +0.411889] kauditd_printk_skb: 67 callbacks suppressed
	[ +18.348862] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [cdc0b5d46762] <==
	* {"level":"info","ts":"2023-12-18T11:53:23.013673Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T11:53:23.013818Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-18T11:53:23.014427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c switched to configuration voters=(15552116827903880748)"}
	{"level":"info","ts":"2023-12-18T11:53:23.016725Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e1e7008e9cae601b","local-member-id":"d7d437db3895ee2c","added-peer-id":"d7d437db3895ee2c","added-peer-peer-urls":["https://192.168.39.124:2380"]}
	{"level":"info","ts":"2023-12-18T11:53:23.017211Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e1e7008e9cae601b","local-member-id":"d7d437db3895ee2c","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T11:53:23.017611Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-18T11:53:23.031314Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-18T11:53:23.033683Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d7d437db3895ee2c","initial-advertise-peer-urls":["https://192.168.39.124:2380"],"listen-peer-urls":["https://192.168.39.124:2380"],"advertise-client-urls":["https://192.168.39.124:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.124:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-18T11:53:23.037338Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-18T11:53:23.038868Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.124:2380"}
	{"level":"info","ts":"2023-12-18T11:53:23.042885Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.124:2380"}
	{"level":"info","ts":"2023-12-18T11:53:24.354237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-18T11:53:24.354429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-18T11:53:24.354563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c received MsgPreVoteResp from d7d437db3895ee2c at term 2"}
	{"level":"info","ts":"2023-12-18T11:53:24.354667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c became candidate at term 3"}
	{"level":"info","ts":"2023-12-18T11:53:24.354742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c received MsgVoteResp from d7d437db3895ee2c at term 3"}
	{"level":"info","ts":"2023-12-18T11:53:24.354764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d7d437db3895ee2c became leader at term 3"}
	{"level":"info","ts":"2023-12-18T11:53:24.35478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d7d437db3895ee2c elected leader d7d437db3895ee2c at term 3"}
	{"level":"info","ts":"2023-12-18T11:53:24.356815Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T11:53:24.356756Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d7d437db3895ee2c","local-member-attributes":"{Name:multinode-107476 ClientURLs:[https://192.168.39.124:2379]}","request-path":"/0/members/d7d437db3895ee2c/attributes","cluster-id":"e1e7008e9cae601b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-18T11:53:24.358324Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.124:2379"}
	{"level":"info","ts":"2023-12-18T11:53:24.35855Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-18T11:53:24.359207Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-18T11:53:24.359471Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-18T11:53:24.359717Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [fcaaf17b1ede] <==
	* WARNING: 2023/12/18 11:51:16 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2023-12-18T11:51:17.060176Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.891718ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17162246747463988395 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/multinode-107476-m03\" mod_revision:619 > success:<request_put:<key:\"/registry/minions/multinode-107476-m03\" value_size:1988 >> failure:<request_range:<key:\"/registry/minions/multinode-107476-m03\" > >>","response":"size:2057"}
	{"level":"info","ts":"2023-12-18T11:51:17.060358Z","caller":"traceutil/trace.go:171","msg":"trace[822089819] linearizableReadLoop","detail":"{readStateIndex:660; appliedIndex:657; }","duration":"654.470614ms","start":"2023-12-18T11:51:16.405877Z","end":"2023-12-18T11:51:17.060348Z","steps":["trace[822089819] 'read index received'  (duration: 211.139601ms)","trace[822089819] 'applied index is now lower than readState.Index'  (duration: 443.330596ms)"],"step_count":2}
	{"level":"info","ts":"2023-12-18T11:51:17.060421Z","caller":"traceutil/trace.go:171","msg":"trace[1390095903] transaction","detail":"{read_only:false; number_of_response:1; response_revision:621; }","duration":"656.494913ms","start":"2023-12-18T11:51:16.403921Z","end":"2023-12-18T11:51:17.060416Z","steps":["trace[1390095903] 'process raft request'  (duration: 526.307885ms)","trace[1390095903] 'compare'  (duration: 129.823781ms)"],"step_count":2}
	{"level":"warn","ts":"2023-12-18T11:51:17.060461Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-18T11:51:16.403905Z","time spent":"656.530725ms","remote":"127.0.0.1:57216","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":42,"response count":0,"response size":2081,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-107476-m03\" mod_revision:619 > success:<request_put:<key:\"/registry/minions/multinode-107476-m03\" value_size:1988 >> failure:<request_range:<key:\"/registry/minions/multinode-107476-m03\" > >"}
	{"level":"warn","ts":"2023-12-18T11:51:17.060515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"654.631823ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-18T11:51:17.060607Z","caller":"traceutil/trace.go:171","msg":"trace[1476158489] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:623; }","duration":"654.733302ms","start":"2023-12-18T11:51:16.405865Z","end":"2023-12-18T11:51:17.060598Z","steps":["trace[1476158489] 'agreement among raft nodes before linearized reading'  (duration: 654.571196ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T11:51:17.060643Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-18T11:51:16.40586Z","time spent":"654.776199ms","remote":"127.0.0.1:57208","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":29,"request content":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" "}
	{"level":"warn","ts":"2023-12-18T11:51:17.060718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"334.48129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-18T11:51:17.060737Z","caller":"traceutil/trace.go:171","msg":"trace[1087470026] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:623; }","duration":"334.514544ms","start":"2023-12-18T11:51:16.726217Z","end":"2023-12-18T11:51:17.060732Z","steps":["trace[1087470026] 'agreement among raft nodes before linearized reading'  (duration: 334.465487ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T11:51:17.06076Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-12-18T11:51:16.726202Z","time spent":"334.55521ms","remote":"127.0.0.1:57170","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-12-18T11:51:17.060935Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.191265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-107476-m03\" ","response":"range_response_count:1 size:2046"}
	{"level":"info","ts":"2023-12-18T11:51:17.060974Z","caller":"traceutil/trace.go:171","msg":"trace[1957339670] range","detail":"{range_begin:/registry/minions/multinode-107476-m03; range_end:; response_count:1; response_revision:623; }","duration":"158.233634ms","start":"2023-12-18T11:51:16.902735Z","end":"2023-12-18T11:51:17.060968Z","steps":["trace[1957339670] 'agreement among raft nodes before linearized reading'  (duration: 158.115731ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-18T11:51:17.061057Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.161237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-107476-m03\" ","response":"range_response_count:1 size:2046"}
	{"level":"info","ts":"2023-12-18T11:51:17.061077Z","caller":"traceutil/trace.go:171","msg":"trace[2088574323] range","detail":"{range_begin:/registry/minions/multinode-107476-m03; range_end:; response_count:1; response_revision:623; }","duration":"107.183789ms","start":"2023-12-18T11:51:16.953888Z","end":"2023-12-18T11:51:17.061072Z","steps":["trace[2088574323] 'agreement among raft nodes before linearized reading'  (duration: 107.13834ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-18T11:52:16.117679Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-18T11:52:16.117782Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-107476","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.124:2380"],"advertise-client-urls":["https://192.168.39.124:2379"]}
	{"level":"warn","ts":"2023-12-18T11:52:16.117997Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.124:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-18T11:52:16.11804Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.124:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-18T11:52:16.118128Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-18T11:52:16.118182Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-18T11:52:16.160283Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d7d437db3895ee2c","current-leader-member-id":"d7d437db3895ee2c"}
	{"level":"info","ts":"2023-12-18T11:52:16.163787Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.124:2380"}
	{"level":"info","ts":"2023-12-18T11:52:16.164184Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.124:2380"}
	{"level":"info","ts":"2023-12-18T11:52:16.164202Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-107476","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.124:2380"],"advertise-client-urls":["https://192.168.39.124:2379"]}
	
	* 
	* ==> kernel <==
	*  11:54:15 up 1 min,  0 users,  load average: 0.92, 0.38, 0.14
	Linux multinode-107476 5.10.57 #1 SMP Wed Dec 13 22:38:26 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [8f8819408c22] <==
	* I1218 11:53:32.110066       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.39 Flags: [] Table: 0} 
	I1218 11:53:42.124517       1 main.go:223] Handling node with IPs: map[192.168.39.124:{}]
	I1218 11:53:42.124538       1 main.go:227] handling current node
	I1218 11:53:42.124548       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:53:42.124552       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:53:42.124643       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:53:42.124648       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.3.0/24] 
	I1218 11:53:52.138453       1 main.go:223] Handling node with IPs: map[192.168.39.124:{}]
	I1218 11:53:52.138560       1 main.go:227] handling current node
	I1218 11:53:52.138581       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:53:52.138595       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:53:52.138774       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:53:52.139139       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.3.0/24] 
	I1218 11:54:02.147368       1 main.go:223] Handling node with IPs: map[192.168.39.124:{}]
	I1218 11:54:02.147493       1 main.go:227] handling current node
	I1218 11:54:02.147521       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:54:02.147536       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:54:02.148012       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:54:02.148094       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.3.0/24] 
	I1218 11:54:12.167805       1 main.go:223] Handling node with IPs: map[192.168.39.124:{}]
	I1218 11:54:12.167919       1 main.go:227] handling current node
	I1218 11:54:12.168062       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:54:12.168117       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:54:12.168541       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:54:12.168606       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kindnet [f6e3111557b6] <==
	* I1218 11:51:39.323178       1 main.go:223] Handling node with IPs: map[192.168.39.124:{}]
	I1218 11:51:39.323494       1 main.go:227] handling current node
	I1218 11:51:39.323523       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:51:39.323627       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:51:39.323918       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:51:39.324004       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.2.0/24] 
	I1218 11:51:49.329153       1 main.go:223] Handling node with IPs: map[192.168.39.124:{}]
	I1218 11:51:49.329174       1 main.go:227] handling current node
	I1218 11:51:49.329183       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:51:49.329188       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:51:49.329299       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:51:49.329304       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.2.0/24] 
	I1218 11:51:59.342400       1 main.go:223] Handling node with IPs: map[192.168.39.124:{}]
	I1218 11:51:59.342422       1 main.go:227] handling current node
	I1218 11:51:59.342431       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:51:59.342435       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:51:59.342789       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:51:59.342802       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.2.0/24] 
	I1218 11:52:09.357725       1 main.go:223] Handling node with IPs: map[192.168.39.124:{}]
	I1218 11:52:09.357782       1 main.go:227] handling current node
	I1218 11:52:09.357821       1 main.go:223] Handling node with IPs: map[192.168.39.238:{}]
	I1218 11:52:09.357828       1 main.go:250] Node multinode-107476-m02 has CIDR [10.244.1.0/24] 
	I1218 11:52:09.358052       1 main.go:223] Handling node with IPs: map[192.168.39.39:{}]
	I1218 11:52:09.358059       1 main.go:250] Node multinode-107476-m03 has CIDR [10.244.3.0/24] 
	I1218 11:52:09.358104       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.39 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [08bca6e395b9] <==
	* I1218 11:53:25.739295       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1218 11:53:25.786267       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1218 11:53:25.786320       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1218 11:53:25.842501       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1218 11:53:25.888572       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1218 11:53:25.889029       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1218 11:53:25.889784       1 aggregator.go:166] initial CRD sync complete...
	I1218 11:53:25.889825       1 autoregister_controller.go:141] Starting autoregister controller
	I1218 11:53:25.889831       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1218 11:53:25.889837       1 cache.go:39] Caches are synced for autoregister controller
	I1218 11:53:25.926322       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1218 11:53:25.926336       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1218 11:53:25.927884       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1218 11:53:25.929130       1 shared_informer.go:318] Caches are synced for configmaps
	I1218 11:53:25.930387       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1218 11:53:25.932424       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1218 11:53:25.940866       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1218 11:53:26.726201       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1218 11:53:28.878768       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1218 11:53:29.032760       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1218 11:53:29.041687       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1218 11:53:29.122180       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1218 11:53:29.136322       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1218 11:53:38.806663       1 controller.go:624] quota admission added evaluator for: endpoints
	I1218 11:53:38.851788       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [9226aa8cd1e9] <==
	* W1218 11:52:25.229759       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.289960       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.327898       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.358046       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.404339       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.436346       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.452155       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.466931       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.469389       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.481640       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.560045       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.603910       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.647070       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.650829       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.707089       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.727369       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.733174       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.821372       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.840843       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.860014       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.885965       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:25.889747       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:26.011046       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:26.017886       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1218 11:52:26.139780       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [4b66d146a3f4] <==
	* I1218 11:50:35.473295       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1218 11:50:35.500019       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-8dg4d"
	I1218 11:50:35.511638       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-sjq8b"
	I1218 11:50:35.534394       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="60.148014ms"
	I1218 11:50:35.550663       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="16.153665ms"
	I1218 11:50:35.551666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="438.537µs"
	I1218 11:50:35.565307       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="68.182µs"
	I1218 11:50:35.569676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="161.273µs"
	I1218 11:50:39.403275       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.228315ms"
	I1218 11:50:39.404080       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="132.561µs"
	I1218 11:50:40.572326       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.863353ms"
	I1218 11:50:40.572435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="49.488µs"
	I1218 11:51:16.395695       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107476-m02"
	I1218 11:51:16.395903       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-107476-m03\" does not exist"
	I1218 11:51:16.724210       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-107476-m03" podCIDRs=["10.244.2.0/24"]
	I1218 11:51:17.079444       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ff4bs"
	I1218 11:51:17.079894       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8hrhv"
	I1218 11:51:18.551988       1 event.go:307] "Event occurred" object="multinode-107476-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-107476-m03 event: Registered Node multinode-107476-m03 in Controller"
	I1218 11:51:18.556078       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-107476-m03"
	I1218 11:51:28.437941       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107476-m02"
	I1218 11:52:03.593492       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107476-m02"
	I1218 11:52:04.452239       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-107476-m03\" does not exist"
	I1218 11:52:04.455937       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107476-m02"
	I1218 11:52:04.478011       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-107476-m03" podCIDRs=["10.244.3.0/24"]
	I1218 11:52:12.687766       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107476-m02"
	
	* 
	* ==> kube-controller-manager [eb37efd287f8] <==
	* I1218 11:53:38.841411       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1218 11:53:38.841461       1 taint_manager.go:210] "Sending events to api server"
	I1218 11:53:38.843043       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1218 11:53:38.844603       1 event.go:307] "Event occurred" object="multinode-107476" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-107476 event: Registered Node multinode-107476 in Controller"
	I1218 11:53:38.844807       1 event.go:307] "Event occurred" object="multinode-107476-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-107476-m02 event: Registered Node multinode-107476-m02 in Controller"
	I1218 11:53:38.844819       1 event.go:307] "Event occurred" object="multinode-107476-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-107476-m03 event: Registered Node multinode-107476-m03 in Controller"
	I1218 11:53:38.845602       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1218 11:53:38.845709       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1218 11:53:38.852796       1 shared_informer.go:318] Caches are synced for GC
	I1218 11:53:38.856476       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1218 11:53:38.906129       1 shared_informer.go:318] Caches are synced for attach detach
	I1218 11:53:38.926121       1 shared_informer.go:318] Caches are synced for stateful set
	I1218 11:53:38.962810       1 shared_informer.go:318] Caches are synced for daemon sets
	I1218 11:53:39.007122       1 shared_informer.go:318] Caches are synced for resource quota
	I1218 11:53:39.043476       1 shared_informer.go:318] Caches are synced for resource quota
	I1218 11:53:39.387873       1 shared_informer.go:318] Caches are synced for garbage collector
	I1218 11:53:39.390328       1 shared_informer.go:318] Caches are synced for garbage collector
	I1218 11:53:39.390377       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1218 11:53:44.207241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.012508ms"
	I1218 11:53:44.208423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.203µs"
	I1218 11:53:44.235406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.968µs"
	I1218 11:53:44.281516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.146942ms"
	I1218 11:53:44.281911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.705µs"
	I1218 11:54:13.261272       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-107476-m02"
	I1218 11:54:13.848861       1 event.go:307] "Event occurred" object="multinode-107476-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-107476-m03 event: Removing Node multinode-107476-m03 from Controller"
	
	* 
	* ==> kube-proxy [9bd0f65050dc] <==
	* I1218 11:49:31.222660       1 server_others.go:69] "Using iptables proxy"
	I1218 11:49:31.233090       1 node.go:141] Successfully retrieved node IP: 192.168.39.124
	I1218 11:49:31.272528       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1218 11:49:31.272850       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1218 11:49:31.276004       1 server_others.go:152] "Using iptables Proxier"
	I1218 11:49:31.276152       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1218 11:49:31.276675       1 server.go:846] "Version info" version="v1.28.4"
	I1218 11:49:31.276713       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 11:49:31.277461       1 config.go:188] "Starting service config controller"
	I1218 11:49:31.277519       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1218 11:49:31.277893       1 config.go:97] "Starting endpoint slice config controller"
	I1218 11:49:31.278110       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1218 11:49:31.279088       1 config.go:315] "Starting node config controller"
	I1218 11:49:31.279128       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1218 11:49:31.377652       1 shared_informer.go:318] Caches are synced for service config
	I1218 11:49:31.378886       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1218 11:49:31.379292       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [f7a1971535c4] <==
	* I1218 11:53:27.735307       1 server_others.go:69] "Using iptables proxy"
	I1218 11:53:27.761237       1 node.go:141] Successfully retrieved node IP: 192.168.39.124
	I1218 11:53:28.254132       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1218 11:53:28.254488       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1218 11:53:28.261044       1 server_others.go:152] "Using iptables Proxier"
	I1218 11:53:28.261584       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1218 11:53:28.262768       1 server.go:846] "Version info" version="v1.28.4"
	I1218 11:53:28.263033       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 11:53:28.264498       1 config.go:188] "Starting service config controller"
	I1218 11:53:28.265311       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1218 11:53:28.265565       1 config.go:97] "Starting endpoint slice config controller"
	I1218 11:53:28.265643       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1218 11:53:28.266498       1 config.go:315] "Starting node config controller"
	I1218 11:53:28.276097       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1218 11:53:28.276785       1 shared_informer.go:318] Caches are synced for node config
	I1218 11:53:28.366502       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1218 11:53:28.366545       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [367a10c5d07b] <==
	* W1218 11:49:13.938944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1218 11:49:13.939033       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1218 11:49:14.010047       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1218 11:49:14.010100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1218 11:49:14.102943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1218 11:49:14.102966       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1218 11:49:14.155521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1218 11:49:14.155645       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1218 11:49:14.232934       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1218 11:49:14.232963       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1218 11:49:14.270424       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1218 11:49:14.270773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1218 11:49:14.335962       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1218 11:49:14.336231       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1218 11:49:14.356302       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1218 11:49:14.356353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1218 11:49:14.439154       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1218 11:49:14.439626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1218 11:49:14.452855       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1218 11:49:14.453140       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1218 11:49:17.083962       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1218 11:52:16.033269       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1218 11:52:16.033377       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1218 11:52:16.033788       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1218 11:52:16.034102       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [b53866e4bc68] <==
	* I1218 11:53:23.436781       1 serving.go:348] Generated self-signed cert in-memory
	W1218 11:53:25.831707       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1218 11:53:25.832162       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1218 11:53:25.832349       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1218 11:53:25.832425       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1218 11:53:25.868491       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1218 11:53:25.868618       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1218 11:53:25.871792       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1218 11:53:25.872151       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1218 11:53:25.872592       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1218 11:53:25.874246       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1218 11:53:25.973297       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-12-18 11:52:55 UTC, ends at Mon 2023-12-18 11:54:15 UTC. --
	Dec 18 11:53:28 multinode-107476 kubelet[1290]: E1218 11:53:28.073349    1290 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 18 11:53:28 multinode-107476 kubelet[1290]: E1218 11:53:28.073416    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17cd3c37-30e8-4d98-81f5-44f58135adf3-config-volume podName:17cd3c37-30e8-4d98-81f5-44f58135adf3 nodeName:}" failed. No retries permitted until 2023-12-18 11:53:30.073401498 +0000 UTC m=+9.870107509 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/17cd3c37-30e8-4d98-81f5-44f58135adf3-config-volume") pod "coredns-5dd5756b68-nl8xc" (UID: "17cd3c37-30e8-4d98-81f5-44f58135adf3") : object "kube-system"/"coredns" not registered
	Dec 18 11:53:28 multinode-107476 kubelet[1290]: E1218 11:53:28.173701    1290 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:28 multinode-107476 kubelet[1290]: E1218 11:53:28.173732    1290 projected.go:198] Error preparing data for projected volume kube-api-access-ptpr6 for pod default/busybox-5bc68d56bd-sjq8b: object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:28 multinode-107476 kubelet[1290]: E1218 11:53:28.173779    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cb993f3-a977-45b8-a535-f0056d2d7e8b-kube-api-access-ptpr6 podName:6cb993f3-a977-45b8-a535-f0056d2d7e8b nodeName:}" failed. No retries permitted until 2023-12-18 11:53:30.173765772 +0000 UTC m=+9.970471783 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptpr6" (UniqueName: "kubernetes.io/projected/6cb993f3-a977-45b8-a535-f0056d2d7e8b-kube-api-access-ptpr6") pod "busybox-5bc68d56bd-sjq8b" (UID: "6cb993f3-a977-45b8-a535-f0056d2d7e8b") : object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: E1218 11:53:30.091805    1290 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: E1218 11:53:30.092668    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17cd3c37-30e8-4d98-81f5-44f58135adf3-config-volume podName:17cd3c37-30e8-4d98-81f5-44f58135adf3 nodeName:}" failed. No retries permitted until 2023-12-18 11:53:34.092646533 +0000 UTC m=+13.889352536 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/17cd3c37-30e8-4d98-81f5-44f58135adf3-config-volume") pod "coredns-5dd5756b68-nl8xc" (UID: "17cd3c37-30e8-4d98-81f5-44f58135adf3") : object "kube-system"/"coredns" not registered
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: E1218 11:53:30.192257    1290 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: E1218 11:53:30.192316    1290 projected.go:198] Error preparing data for projected volume kube-api-access-ptpr6 for pod default/busybox-5bc68d56bd-sjq8b: object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: E1218 11:53:30.192364    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cb993f3-a977-45b8-a535-f0056d2d7e8b-kube-api-access-ptpr6 podName:6cb993f3-a977-45b8-a535-f0056d2d7e8b nodeName:}" failed. No retries permitted until 2023-12-18 11:53:34.192351041 +0000 UTC m=+13.989057052 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptpr6" (UniqueName: "kubernetes.io/projected/6cb993f3-a977-45b8-a535-f0056d2d7e8b-kube-api-access-ptpr6") pod "busybox-5bc68d56bd-sjq8b" (UID: "6cb993f3-a977-45b8-a535-f0056d2d7e8b") : object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: E1218 11:53:30.908177    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-nl8xc" podUID="17cd3c37-30e8-4d98-81f5-44f58135adf3"
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: I1218 11:53:30.908582    1290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a3f2a24cd178f5c3f5a7b488f9fc08e20ab1568158a073df513cb48f1ad5398"
	Dec 18 11:53:30 multinode-107476 kubelet[1290]: E1218 11:53:30.910521    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-sjq8b" podUID="6cb993f3-a977-45b8-a535-f0056d2d7e8b"
	Dec 18 11:53:32 multinode-107476 kubelet[1290]: E1218 11:53:32.556053    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-nl8xc" podUID="17cd3c37-30e8-4d98-81f5-44f58135adf3"
	Dec 18 11:53:32 multinode-107476 kubelet[1290]: E1218 11:53:32.556252    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-sjq8b" podUID="6cb993f3-a977-45b8-a535-f0056d2d7e8b"
	Dec 18 11:53:33 multinode-107476 kubelet[1290]: I1218 11:53:33.401542    1290 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 18 11:53:34 multinode-107476 kubelet[1290]: E1218 11:53:34.127648    1290 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 18 11:53:34 multinode-107476 kubelet[1290]: E1218 11:53:34.128318    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/17cd3c37-30e8-4d98-81f5-44f58135adf3-config-volume podName:17cd3c37-30e8-4d98-81f5-44f58135adf3 nodeName:}" failed. No retries permitted until 2023-12-18 11:53:42.1282954 +0000 UTC m=+21.925001404 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/17cd3c37-30e8-4d98-81f5-44f58135adf3-config-volume") pod "coredns-5dd5756b68-nl8xc" (UID: "17cd3c37-30e8-4d98-81f5-44f58135adf3") : object "kube-system"/"coredns" not registered
	Dec 18 11:53:34 multinode-107476 kubelet[1290]: E1218 11:53:34.228689    1290 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:34 multinode-107476 kubelet[1290]: E1218 11:53:34.228758    1290 projected.go:198] Error preparing data for projected volume kube-api-access-ptpr6 for pod default/busybox-5bc68d56bd-sjq8b: object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:34 multinode-107476 kubelet[1290]: E1218 11:53:34.228810    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cb993f3-a977-45b8-a535-f0056d2d7e8b-kube-api-access-ptpr6 podName:6cb993f3-a977-45b8-a535-f0056d2d7e8b nodeName:}" failed. No retries permitted until 2023-12-18 11:53:42.228796297 +0000 UTC m=+22.025502309 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-ptpr6" (UniqueName: "kubernetes.io/projected/6cb993f3-a977-45b8-a535-f0056d2d7e8b-kube-api-access-ptpr6") pod "busybox-5bc68d56bd-sjq8b" (UID: "6cb993f3-a977-45b8-a535-f0056d2d7e8b") : object "default"/"kube-root-ca.crt" not registered
	Dec 18 11:53:59 multinode-107476 kubelet[1290]: I1218 11:53:59.402656    1290 scope.go:117] "RemoveContainer" containerID="de7401b83d12863f008a4b978b770f3f7b4062c46372c4e00e2467eb6e5f0ba2"
	Dec 18 11:53:59 multinode-107476 kubelet[1290]: I1218 11:53:59.405218    1290 scope.go:117] "RemoveContainer" containerID="123ceedfce1ccd5f27ac8b7368fca1d6cacecf05d48983a4f7aa454d139d8b08"
	Dec 18 11:53:59 multinode-107476 kubelet[1290]: E1218 11:53:59.407333    1290 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e04ec19d-39a8-4849-b604-8e46b7f9cea3)\"" pod="kube-system/storage-provisioner" podUID="e04ec19d-39a8-4849-b604-8e46b7f9cea3"
	Dec 18 11:54:12 multinode-107476 kubelet[1290]: I1218 11:54:12.558222    1290 scope.go:117] "RemoveContainer" containerID="123ceedfce1ccd5f27ac8b7368fca1d6cacecf05d48983a4f7aa454d139d8b08"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-107476 -n multinode-107476
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-107476 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (3.26s)


Test pass (292/328)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 50.42
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.4/json-events 16.08
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.08
17 TestDownloadOnly/v1.29.0-rc.2/json-events 42.29
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.16
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
26 TestBinaryMirror 0.58
27 TestOffline 135.9
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
32 TestAddons/Setup 221.8
34 TestAddons/parallel/Registry 20.08
35 TestAddons/parallel/Ingress 25.01
36 TestAddons/parallel/InspektorGadget 11.07
37 TestAddons/parallel/MetricsServer 6.9
38 TestAddons/parallel/HelmTiller 23.64
40 TestAddons/parallel/CSI 76.82
41 TestAddons/parallel/Headlamp 18.85
42 TestAddons/parallel/CloudSpanner 5.54
43 TestAddons/parallel/LocalPath 13.34
44 TestAddons/parallel/NvidiaDevicePlugin 6.55
47 TestAddons/serial/GCPAuth/Namespaces 0.14
48 TestAddons/StoppedEnableDisable 13.43
49 TestCertOptions 64
50 TestCertExpiration 279.01
51 TestDockerFlags 80.53
52 TestForceSystemdFlag 55.79
53 TestForceSystemdEnv 95.04
55 TestKVMDriverInstallOrUpdate 3.65
59 TestErrorSpam/setup 49.35
60 TestErrorSpam/start 0.4
61 TestErrorSpam/status 0.81
62 TestErrorSpam/pause 1.23
63 TestErrorSpam/unpause 1.43
64 TestErrorSpam/stop 4.27
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 108.73
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 36.7
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.26
76 TestFunctional/serial/CacheCmd/cache/add_local 1.52
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.31
81 TestFunctional/serial/CacheCmd/cache/delete 0.13
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 42.95
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.15
87 TestFunctional/serial/LogsFileCmd 1.16
88 TestFunctional/serial/InvalidService 5.17
90 TestFunctional/parallel/ConfigCmd 0.49
91 TestFunctional/parallel/DashboardCmd 15.7
92 TestFunctional/parallel/DryRun 0.33
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 1.27
98 TestFunctional/parallel/ServiceCmdConnect 30.58
99 TestFunctional/parallel/AddonsCmd 0.17
100 TestFunctional/parallel/PersistentVolumeClaim 56.84
102 TestFunctional/parallel/SSHCmd 0.49
103 TestFunctional/parallel/CpCmd 1.57
104 TestFunctional/parallel/MySQL 35.44
105 TestFunctional/parallel/FileSync 0.29
106 TestFunctional/parallel/CertSync 1.59
110 TestFunctional/parallel/NodeLabels 0.11
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
114 TestFunctional/parallel/License 0.64
115 TestFunctional/parallel/DockerEnv/bash 1.08
125 TestFunctional/parallel/ServiceCmd/DeployApp 29.17
126 TestFunctional/parallel/ServiceCmd/List 0.48
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
130 TestFunctional/parallel/ProfileCmd/profile_list 0.33
131 TestFunctional/parallel/ServiceCmd/Format 0.42
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
133 TestFunctional/parallel/ServiceCmd/URL 0.6
134 TestFunctional/parallel/MountCmd/any-port 9.76
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
138 TestFunctional/parallel/Version/short 0.08
139 TestFunctional/parallel/Version/components 0.77
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.59
145 TestFunctional/parallel/ImageCommands/Setup 2.12
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.23
147 TestFunctional/parallel/MountCmd/specific-port 1.95
148 TestFunctional/parallel/MountCmd/VerifyCleanup 0.87
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.91
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.9
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.21
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.43
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.4
155 TestFunctional/delete_addon-resizer_images 0.07
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
158 TestGvisorAddon 343.91
161 TestImageBuild/serial/Setup 54.08
162 TestImageBuild/serial/NormalBuild 2.55
163 TestImageBuild/serial/BuildWithBuildArg 1.45
164 TestImageBuild/serial/BuildWithDockerIgnore 0.43
165 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.33
168 TestIngressAddonLegacy/StartLegacyK8sCluster 94.54
170 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 18.49
171 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.58
172 TestIngressAddonLegacy/serial/ValidateIngressAddons 49.05
175 TestJSONOutput/start/Command 69.4
176 TestJSONOutput/start/Audit 0
178 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Command 0.61
182 TestJSONOutput/pause/Audit 0
184 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Command 0.57
188 TestJSONOutput/unpause/Audit 0
190 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/stop/Command 8.12
194 TestJSONOutput/stop/Audit 0
196 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
198 TestErrorJSONOutput 0.23
203 TestMainNoArgs 0.06
204 TestMinikubeProfile 105.44
207 TestMountStart/serial/StartWithMountFirst 29.91
208 TestMountStart/serial/VerifyMountFirst 0.4
209 TestMountStart/serial/StartWithMountSecond 31.2
210 TestMountStart/serial/VerifyMountSecond 0.43
211 TestMountStart/serial/DeleteFirst 0.88
212 TestMountStart/serial/VerifyMountPostDelete 0.42
213 TestMountStart/serial/Stop 2.1
214 TestMountStart/serial/RestartStopped 26.83
215 TestMountStart/serial/VerifyMountPostStop 0.41
218 TestMultiNode/serial/FreshStart2Nodes 130.41
219 TestMultiNode/serial/DeployApp2Nodes 7.03
220 TestMultiNode/serial/PingHostFrom2Pods 0.97
221 TestMultiNode/serial/AddNode 47.7
222 TestMultiNode/serial/MultiNodeLabels 0.06
223 TestMultiNode/serial/ProfileList 0.23
224 TestMultiNode/serial/CopyFile 8
225 TestMultiNode/serial/StopNode 4.01
226 TestMultiNode/serial/StartAfterStop 32.42
229 TestMultiNode/serial/StopMultiNode 111.78
230 TestMultiNode/serial/RestartMultiNode 106.22
231 TestMultiNode/serial/ValidateNameConflict 54.26
236 TestPreload 190.4
238 TestScheduledStopUnix 123.95
239 TestSkaffold 148.01
242 TestRunningBinaryUpgrade 242.13
244 TestKubernetesUpgrade 189.54
258 TestPause/serial/Start 125.51
266 TestPause/serial/SecondStartNoReconfiguration 45.64
268 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
269 TestNoKubernetes/serial/StartWithK8s 64.29
270 TestPause/serial/Pause 0.65
271 TestPause/serial/VerifyStatus 0.85
272 TestPause/serial/Unpause 0.58
273 TestStoppedBinaryUpgrade/Setup 1.84
274 TestPause/serial/PauseAgain 0.92
275 TestPause/serial/DeletePaused 1.23
276 TestStoppedBinaryUpgrade/Upgrade 247.98
277 TestPause/serial/VerifyDeletedResources 4.2
278 TestNetworkPlugins/group/auto/Start 101.27
279 TestNetworkPlugins/group/kindnet/Start 135.43
280 TestNoKubernetes/serial/StartWithStopK8s 34.32
281 TestNoKubernetes/serial/Start 36.2
282 TestNetworkPlugins/group/auto/KubeletFlags 0.37
283 TestNetworkPlugins/group/auto/NetCatPod 15.49
284 TestNetworkPlugins/group/auto/DNS 0.24
285 TestNetworkPlugins/group/auto/Localhost 0.16
286 TestNetworkPlugins/group/auto/HairPin 0.29
287 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
288 TestNoKubernetes/serial/ProfileList 1.31
289 TestNoKubernetes/serial/Stop 2.48
290 TestNoKubernetes/serial/StartNoArgs 25.61
291 TestNetworkPlugins/group/calico/Start 119.13
292 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
293 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
294 TestNetworkPlugins/group/kindnet/NetCatPod 10.29
295 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
296 TestNetworkPlugins/group/kindnet/DNS 0.2
297 TestNetworkPlugins/group/kindnet/Localhost 0.19
298 TestNetworkPlugins/group/kindnet/HairPin 0.17
299 TestNetworkPlugins/group/false/Start 144.47
300 TestNetworkPlugins/group/enable-default-cni/Start 161.29
301 TestStoppedBinaryUpgrade/MinikubeLogs 1.46
302 TestNetworkPlugins/group/flannel/Start 120.4
303 TestNetworkPlugins/group/calico/ControllerPod 5.11
304 TestNetworkPlugins/group/calico/KubeletFlags 0.29
305 TestNetworkPlugins/group/calico/NetCatPod 13.36
306 TestNetworkPlugins/group/calico/DNS 0.32
307 TestNetworkPlugins/group/calico/Localhost 0.19
308 TestNetworkPlugins/group/calico/HairPin 0.2
309 TestNetworkPlugins/group/bridge/Start 95.51
310 TestNetworkPlugins/group/false/KubeletFlags 0.24
311 TestNetworkPlugins/group/false/NetCatPod 12.3
312 TestNetworkPlugins/group/false/DNS 0.27
313 TestNetworkPlugins/group/false/Localhost 0.23
314 TestNetworkPlugins/group/false/HairPin 0.22
315 TestNetworkPlugins/group/kubenet/Start 91.41
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.26
318 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
319 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
320 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
321 TestNetworkPlugins/group/custom-flannel/Start 99.48
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
324 TestNetworkPlugins/group/flannel/NetCatPod 16.3
325 TestNetworkPlugins/group/flannel/DNS 0.19
326 TestNetworkPlugins/group/flannel/Localhost 0.17
327 TestNetworkPlugins/group/flannel/HairPin 0.18
328 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
329 TestNetworkPlugins/group/bridge/NetCatPod 13.27
330 TestNetworkPlugins/group/bridge/DNS 0.27
331 TestNetworkPlugins/group/bridge/Localhost 0.24
332 TestNetworkPlugins/group/bridge/HairPin 0.27
334 TestStartStop/group/old-k8s-version/serial/FirstStart 143.03
335 TestNetworkPlugins/group/kubenet/KubeletFlags 0.31
336 TestNetworkPlugins/group/kubenet/NetCatPod 13.44
338 TestStartStop/group/no-preload/serial/FirstStart 122.53
339 TestNetworkPlugins/group/kubenet/DNS 0.2
340 TestNetworkPlugins/group/kubenet/Localhost 0.17
341 TestNetworkPlugins/group/kubenet/HairPin 0.18
343 TestStartStop/group/embed-certs/serial/FirstStart 93.34
344 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
345 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.25
346 TestNetworkPlugins/group/custom-flannel/DNS 0.24
347 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
348 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
350 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.62
351 TestStartStop/group/no-preload/serial/DeployApp 10.4
352 TestStartStop/group/old-k8s-version/serial/DeployApp 9.45
353 TestStartStop/group/embed-certs/serial/DeployApp 10.38
354 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
355 TestStartStop/group/no-preload/serial/Stop 13.14
356 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.9
357 TestStartStop/group/old-k8s-version/serial/Stop 13.15
358 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
359 TestStartStop/group/embed-certs/serial/Stop 13.13
360 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
361 TestStartStop/group/no-preload/serial/SecondStart 337.96
362 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
363 TestStartStop/group/old-k8s-version/serial/SecondStart 466.72
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
365 TestStartStop/group/embed-certs/serial/SecondStart 355.45
366 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.33
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
368 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.14
369 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
370 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 353.21
371 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 20.01
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
373 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
375 TestStartStop/group/no-preload/serial/Pause 2.84
376 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
378 TestStartStop/group/newest-cni/serial/FirstStart 72.59
379 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
380 TestStartStop/group/embed-certs/serial/Pause 3.06
381 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
382 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
383 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
384 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.9
385 TestStartStop/group/newest-cni/serial/DeployApp 0
386 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.93
387 TestStartStop/group/newest-cni/serial/Stop 13.15
388 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
389 TestStartStop/group/newest-cni/serial/SecondStart 51.86
390 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
391 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
392 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
393 TestStartStop/group/old-k8s-version/serial/Pause 2.56
394 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
395 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
396 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
397 TestStartStop/group/newest-cni/serial/Pause 2.44
TestDownloadOnly/v1.16.0/json-events (50.42s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-191704 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-191704 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (50.419524027s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (50.42s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-191704
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-191704: exit status 85 (86.900103ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-191704 | jenkins | v1.32.0 | 18 Dec 23 11:27 UTC |          |
	|         | -p download-only-191704        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 11:27:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 11:27:16.787207  690751 out.go:296] Setting OutFile to fd 1 ...
	I1218 11:27:16.787490  690751 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:27:16.787499  690751 out.go:309] Setting ErrFile to fd 2...
	I1218 11:27:16.787503  690751 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:27:16.787734  690751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
	W1218 11:27:16.787858  690751 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17824-683489/.minikube/config/config.json: open /home/jenkins/minikube-integration/17824-683489/.minikube/config/config.json: no such file or directory
	I1218 11:27:16.788434  690751 out.go:303] Setting JSON to true
	I1218 11:27:16.789718  690751 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11383,"bootTime":1702887454,"procs":469,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1218 11:27:16.789790  690751 start.go:138] virtualization: kvm guest
	I1218 11:27:16.792431  690751 out.go:97] [download-only-191704] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1218 11:27:16.792519  690751 notify.go:220] Checking for updates...
	I1218 11:27:16.794244  690751 out.go:169] MINIKUBE_LOCATION=17824
	W1218 11:27:16.792641  690751 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball: no such file or directory
	I1218 11:27:16.797254  690751 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 11:27:16.798809  690751 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:27:16.800428  690751 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17824-683489/.minikube
	I1218 11:27:16.801933  690751 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1218 11:27:16.804698  690751 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 11:27:16.804965  690751 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 11:27:16.837907  690751 out.go:97] Using the kvm2 driver based on user configuration
	I1218 11:27:16.837935  690751 start.go:298] selected driver: kvm2
	I1218 11:27:16.837949  690751 start.go:902] validating driver "kvm2" against <nil>
	I1218 11:27:16.838415  690751 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:27:16.838531  690751 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17824-683489/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1218 11:27:16.853462  690751 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1218 11:27:16.853533  690751 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1218 11:27:16.854136  690751 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1218 11:27:16.854314  690751 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1218 11:27:16.854403  690751 cni.go:84] Creating CNI manager for ""
	I1218 11:27:16.854426  690751 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1218 11:27:16.854442  690751 start_flags.go:323] config:
	{Name:download-only-191704 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-191704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:27:16.854803  690751 iso.go:125] acquiring lock: {Name:mk77379b84c746649cc72ce2f2c3817c5150de49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:27:16.856850  690751 out.go:97] Downloading VM boot image ...
	I1218 11:27:16.856909  690751 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17824-683489/.minikube/cache/iso/amd64/minikube-v1.32.1-1702490427-17765-amd64.iso
	I1218 11:27:26.125610  690751 out.go:97] Starting control plane node download-only-191704 in cluster download-only-191704
	I1218 11:27:26.125655  690751 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1218 11:27:26.237045  690751 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1218 11:27:26.237084  690751 cache.go:56] Caching tarball of preloaded images
	I1218 11:27:26.237304  690751 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1218 11:27:26.239536  690751 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1218 11:27:26.239565  690751 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1218 11:27:26.776231  690751 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1218 11:27:37.572699  690751 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1218 11:27:37.573535  690751 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1218 11:27:38.331999  690751 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1218 11:27:38.332382  690751 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/download-only-191704/config.json ...
	I1218 11:27:38.332419  690751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/download-only-191704/config.json: {Name:mk815f1a7f72ddf7468dbe37d7dacf973d056269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1218 11:27:38.332615  690751 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1218 11:27:38.332831  690751 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17824-683489/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-191704"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.4/json-events (16.08s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-191704 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-191704 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=kvm2 : (16.077648743s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (16.08s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-191704
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-191704: exit status 85 (79.899751ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-191704 | jenkins | v1.32.0 | 18 Dec 23 11:27 UTC |          |
	|         | -p download-only-191704        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-191704 | jenkins | v1.32.0 | 18 Dec 23 11:28 UTC |          |
	|         | -p download-only-191704        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 11:28:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 11:28:07.295195  690902 out.go:296] Setting OutFile to fd 1 ...
	I1218 11:28:07.295336  690902 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:28:07.295346  690902 out.go:309] Setting ErrFile to fd 2...
	I1218 11:28:07.295351  690902 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:28:07.295550  690902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
	W1218 11:28:07.295689  690902 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17824-683489/.minikube/config/config.json: open /home/jenkins/minikube-integration/17824-683489/.minikube/config/config.json: no such file or directory
	I1218 11:28:07.296122  690902 out.go:303] Setting JSON to true
	I1218 11:28:07.297259  690902 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11433,"bootTime":1702887454,"procs":465,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1218 11:28:07.297325  690902 start.go:138] virtualization: kvm guest
	I1218 11:28:07.299571  690902 out.go:97] [download-only-191704] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1218 11:28:07.301576  690902 out.go:169] MINIKUBE_LOCATION=17824
	I1218 11:28:07.299789  690902 notify.go:220] Checking for updates...
	I1218 11:28:07.305001  690902 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 11:28:07.306824  690902 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:28:07.308343  690902 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17824-683489/.minikube
	I1218 11:28:07.309838  690902 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1218 11:28:07.312504  690902 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 11:28:07.312953  690902 config.go:182] Loaded profile config "download-only-191704": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1218 11:28:07.313008  690902 start.go:810] api.Load failed for download-only-191704: filestore "download-only-191704": Docker machine "download-only-191704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 11:28:07.313127  690902 driver.go:392] Setting default libvirt URI to qemu:///system
	W1218 11:28:07.313167  690902 start.go:810] api.Load failed for download-only-191704: filestore "download-only-191704": Docker machine "download-only-191704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 11:28:07.345334  690902 out.go:97] Using the kvm2 driver based on existing profile
	I1218 11:28:07.345367  690902 start.go:298] selected driver: kvm2
	I1218 11:28:07.345373  690902 start.go:902] validating driver "kvm2" against &{Name:download-only-191704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-191704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:28:07.345786  690902 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:28:07.345872  690902 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17824-683489/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1218 11:28:07.360705  690902 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1218 11:28:07.361453  690902 cni.go:84] Creating CNI manager for ""
	I1218 11:28:07.361476  690902 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1218 11:28:07.361489  690902 start_flags.go:323] config:
	{Name:download-only-191704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-191704 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Sock
etVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:28:07.361621  690902 iso.go:125] acquiring lock: {Name:mk77379b84c746649cc72ce2f2c3817c5150de49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:28:07.363381  690902 out.go:97] Starting control plane node download-only-191704 in cluster download-only-191704
	I1218 11:28:07.363402  690902 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 11:28:07.467860  690902 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1218 11:28:07.467895  690902 cache.go:56] Caching tarball of preloaded images
	I1218 11:28:07.468041  690902 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1218 11:28:07.470184  690902 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1218 11:28:07.470198  690902 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1218 11:28:07.582155  690902 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1218 11:28:21.443239  690902 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1218 11:28:21.443338  690902 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-191704"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/json-events (42.29s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-191704 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-191704 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=kvm2 : (42.291452466s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (42.29s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-191704
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-191704: exit status 85 (82.878314ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-191704 | jenkins | v1.32.0 | 18 Dec 23 11:27 UTC |          |
	|         | -p download-only-191704           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-191704 | jenkins | v1.32.0 | 18 Dec 23 11:28 UTC |          |
	|         | -p download-only-191704           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-191704 | jenkins | v1.32.0 | 18 Dec 23 11:28 UTC |          |
	|         | -p download-only-191704           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/18 11:28:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1218 11:28:23.455596  690969 out.go:296] Setting OutFile to fd 1 ...
	I1218 11:28:23.455820  690969 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:28:23.455830  690969 out.go:309] Setting ErrFile to fd 2...
	I1218 11:28:23.455838  690969 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:28:23.456077  690969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
	W1218 11:28:23.456228  690969 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17824-683489/.minikube/config/config.json: open /home/jenkins/minikube-integration/17824-683489/.minikube/config/config.json: no such file or directory
	I1218 11:28:23.456723  690969 out.go:303] Setting JSON to true
	I1218 11:28:23.457939  690969 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11449,"bootTime":1702887454,"procs":466,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1218 11:28:23.458012  690969 start.go:138] virtualization: kvm guest
	I1218 11:28:23.460468  690969 out.go:97] [download-only-191704] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1218 11:28:23.462443  690969 out.go:169] MINIKUBE_LOCATION=17824
	I1218 11:28:23.460716  690969 notify.go:220] Checking for updates...
	I1218 11:28:23.465583  690969 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 11:28:23.467174  690969 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:28:23.468713  690969 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17824-683489/.minikube
	I1218 11:28:23.470138  690969 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1218 11:28:23.472644  690969 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1218 11:28:23.473138  690969 config.go:182] Loaded profile config "download-only-191704": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1218 11:28:23.473203  690969 start.go:810] api.Load failed for download-only-191704: filestore "download-only-191704": Docker machine "download-only-191704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 11:28:23.473298  690969 driver.go:392] Setting default libvirt URI to qemu:///system
	W1218 11:28:23.473349  690969 start.go:810] api.Load failed for download-only-191704: filestore "download-only-191704": Docker machine "download-only-191704" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1218 11:28:23.505011  690969 out.go:97] Using the kvm2 driver based on existing profile
	I1218 11:28:23.505042  690969 start.go:298] selected driver: kvm2
	I1218 11:28:23.505049  690969 start.go:902] validating driver "kvm2" against &{Name:download-only-191704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:download-only-191704 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:28:23.505543  690969 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:28:23.505634  690969 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17824-683489/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1218 11:28:23.520078  690969 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1218 11:28:23.521046  690969 cni.go:84] Creating CNI manager for ""
	I1218 11:28:23.521074  690969 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1218 11:28:23.521102  690969 start_flags.go:323] config:
	{Name:download-only-191704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-191704 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:28:23.521306  690969 iso.go:125] acquiring lock: {Name:mk77379b84c746649cc72ce2f2c3817c5150de49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1218 11:28:23.523156  690969 out.go:97] Starting control plane node download-only-191704 in cluster download-only-191704
	I1218 11:28:23.523178  690969 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1218 11:28:24.029578  690969 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1218 11:28:24.029617  690969 cache.go:56] Caching tarball of preloaded images
	I1218 11:28:24.029838  690969 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1218 11:28:24.032117  690969 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1218 11:28:24.032150  690969 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1218 11:28:24.142749  690969 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:d472e9d5f1548dd0d68eb75b714c5436 -> /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1218 11:28:34.843049  690969 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1218 11:28:34.843149  690969 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17824-683489/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1218 11:28:35.586419  690969 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I1218 11:28:35.586577  690969 profile.go:148] Saving config to /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/download-only-191704/config.json ...
	I1218 11:28:35.586794  690969 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1218 11:28:35.586979  690969 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17824-683489/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-191704"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.16s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-191704
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-337018 --alsologtostderr --binary-mirror http://127.0.0.1:45291 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-337018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-337018
--- PASS: TestBinaryMirror (0.58s)

TestOffline (135.9s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-288297 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-288297 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (2m15.012638245s)
helpers_test.go:175: Cleaning up "offline-docker-288297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-288297
--- PASS: TestOffline (135.90s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-694092
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-694092: exit status 85 (65.346478ms)

-- stdout --
	* Profile "addons-694092" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-694092"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-694092
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-694092: exit status 85 (65.847181ms)

-- stdout --
	* Profile "addons-694092" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-694092"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (221.8s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-694092 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-694092 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m41.800650701s)
--- PASS: TestAddons/Setup (221.80s)

TestAddons/parallel/Registry (20.08s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 26.606391ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-wjbwt" [06775cec-0910-4fd0-9962-55ca4205c34f] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006757086s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vhc5b" [370ec375-e60c-4d87-97bc-cd2c32c2f255] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005702011s
addons_test.go:339: (dbg) Run:  kubectl --context addons-694092 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-694092 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-694092 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.087721485s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-694092 ip
2023/12/18 11:33:07 [DEBUG] GET http://192.168.39.156:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-694092 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.08s)

TestAddons/parallel/Ingress (25.01s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-694092 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-694092 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-694092 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d7af64a6-7371-4069-85a6-edc13948a103] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d7af64a6-7371-4069-85a6-edc13948a103] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.006949422s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-694092 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-694092 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-694092 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.156
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-694092 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-694092 addons disable ingress-dns --alsologtostderr -v=1: (1.960033594s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-694092 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-694092 addons disable ingress --alsologtostderr -v=1: (7.789120631s)
--- PASS: TestAddons/parallel/Ingress (25.01s)

TestAddons/parallel/InspektorGadget (11.07s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-smh4b" [de29071a-d731-4d7d-872d-8ecb60dad09d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.008258764s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-694092
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-694092: (6.061123932s)
--- PASS: TestAddons/parallel/InspektorGadget (11.07s)

TestAddons/parallel/MetricsServer (6.9s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 27.457986ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-rvjml" [ec7517be-68f5-4e64-9635-d9363a5efb6a] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005038455s
addons_test.go:414: (dbg) Run:  kubectl --context addons-694092 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-694092 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.90s)

TestAddons/parallel/HelmTiller (23.64s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 4.375498ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-9f2l5" [616db9c9-dc44-4707-8073-00f64cf0588e] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009348115s
addons_test.go:472: (dbg) Run:  kubectl --context addons-694092 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-694092 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (17.971711662s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-694092 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (23.64s)

TestAddons/parallel/CSI (76.82s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 36.228575ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-694092 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694092 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) [poll above repeated 16 more times]
addons_test.go:573: (dbg) Run:  kubectl --context addons-694092 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [510d33f7-02c2-495e-bb70-60dbd78c7486] Pending
helpers_test.go:344: "task-pv-pod" [510d33f7-02c2-495e-bb70-60dbd78c7486] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [510d33f7-02c2-495e-bb70-60dbd78c7486] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 22.003965523s
addons_test.go:583: (dbg) Run:  kubectl --context addons-694092 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-694092 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-694092 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-694092 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-694092 delete pod task-pv-pod: (1.254032386s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-694092 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-694092 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694092 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) [poll above repeated 19 more times]
addons_test.go:615: (dbg) Run:  kubectl --context addons-694092 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [671b8096-623d-414a-8682-43bf72dbc280] Pending
helpers_test.go:344: "task-pv-pod-restore" [671b8096-623d-414a-8682-43bf72dbc280] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [671b8096-623d-414a-8682-43bf72dbc280] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005036137s
addons_test.go:625: (dbg) Run:  kubectl --context addons-694092 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-694092 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-694092 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-694092 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-694092 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.748398628s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-694092 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (76.82s)

TestAddons/parallel/Headlamp (18.85s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-694092 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-694092 --alsologtostderr -v=1: (1.844964493s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-2268m" [f23f119a-d5ef-456c-9500-64b028db667a] Pending
helpers_test.go:344: "headlamp-777fd4b855-2268m" [f23f119a-d5ef-456c-9500-64b028db667a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-2268m" [f23f119a-d5ef-456c-9500-64b028db667a] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.004416044s
--- PASS: TestAddons/parallel/Headlamp (18.85s)

TestAddons/parallel/CloudSpanner (5.54s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-wsbtv" [dbf7be5b-775b-423c-a8fb-ee4587033bde] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004104772s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-694092
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

TestAddons/parallel/LocalPath (13.34s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-694092 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-694092 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-694092 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) [poll above repeated 6 more times]
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [079d25b3-aa41-460b-ad40-8717f19651bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [079d25b3-aa41-460b-ad40-8717f19651bd] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [079d25b3-aa41-460b-ad40-8717f19651bd] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.006289928s
addons_test.go:890: (dbg) Run:  kubectl --context addons-694092 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-694092 ssh "cat /opt/local-path-provisioner/pvc-b2ca6e1b-64ac-435e-aef4-8e757a76aa9b_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-694092 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-694092 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-694092 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (13.34s)

TestAddons/parallel/NvidiaDevicePlugin (6.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bmbrb" [99f27bb7-e734-45d0-92e3-788dd5d7da50] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006360601s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-694092
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.55s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-694092 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-694092 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (13.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-694092
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-694092: (13.114577297s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-694092
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-694092
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-694092
--- PASS: TestAddons/StoppedEnableDisable (13.43s)

TestCertOptions (64s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-169857 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-169857 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m2.355515935s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-169857 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-169857 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-169857 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-169857" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-169857
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-169857: (1.088497336s)
--- PASS: TestCertOptions (64.00s)

TestCertExpiration (279.01s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-779937 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-779937 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m6.162530527s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-779937 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E1218 12:12:56.316198  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-779937 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (31.608426242s)
helpers_test.go:175: Cleaning up "cert-expiration-779937" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-779937
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-779937: (1.241961242s)
--- PASS: TestCertExpiration (279.01s)

TestDockerFlags (80.53s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-613232 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-613232 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m18.714944872s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-613232 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-613232 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-613232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-613232
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-613232: (1.136258768s)
--- PASS: TestDockerFlags (80.53s)

TestForceSystemdFlag (55.79s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-333322 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-333322 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (54.271454207s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-333322 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-333322" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-333322
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-333322: (1.23973149s)
--- PASS: TestForceSystemdFlag (55.79s)

TestForceSystemdEnv (95.04s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-704755 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
E1218 12:07:48.660400  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 12:07:56.316522  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-704755 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m33.452342823s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-704755 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-704755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-704755
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-704755: (1.300392743s)
--- PASS: TestForceSystemdEnv (95.04s)

TestKVMDriverInstallOrUpdate (3.65s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.65s)

TestErrorSpam/setup (49.35s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-402595 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-402595 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-402595 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-402595 --driver=kvm2 : (49.352045257s)
--- PASS: TestErrorSpam/setup (49.35s)

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.81s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 status
--- PASS: TestErrorSpam/status (0.81s)

TestErrorSpam/pause (1.23s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 pause
--- PASS: TestErrorSpam/pause (1.23s)

TestErrorSpam/unpause (1.43s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 unpause
--- PASS: TestErrorSpam/unpause (1.43s)

TestErrorSpam/stop (4.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 stop: (4.100994644s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402595 --log_dir /tmp/nospam-402595 stop
--- PASS: TestErrorSpam/stop (4.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17824-683489/.minikube/files/etc/test/nested/copy/690739/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (108.73s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-622176 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-622176 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m48.730447381s)
--- PASS: TestFunctional/serial/StartWithProxy (108.73s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.7s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-622176 --alsologtostderr -v=8
E1218 11:37:48.660501  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 11:37:48.666488  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 11:37:48.676834  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 11:37:48.697171  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 11:37:48.737576  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 11:37:48.817971  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 11:37:48.978456  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 11:37:49.299079  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 11:37:49.940094  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 11:37:51.221148  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 11:37:53.782061  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 11:37:58.903023  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-622176 --alsologtostderr -v=8: (36.701329583s)
functional_test.go:659: soft start took 36.701978394s for "functional-622176" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.70s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-622176 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.26s)

TestFunctional/serial/CacheCmd/cache/add_local (1.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-622176 /tmp/TestFunctionalserialCacheCmdcacheadd_local2331208188/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 cache add minikube-local-cache-test:functional-622176
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-622176 cache add minikube-local-cache-test:functional-622176: (1.181225321s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 cache delete minikube-local-cache-test:functional-622176
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-622176
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.52s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622176 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (247.532517ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.31s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 kubectl -- --context functional-622176 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-622176 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (42.95s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-622176 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1218 11:38:09.144226  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 11:38:29.625335  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-622176 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.950261482s)
functional_test.go:757: restart took 42.950413885s for "functional-622176" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.95s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-622176 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.15s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-622176 logs: (1.152508384s)
--- PASS: TestFunctional/serial/LogsCmd (1.15s)

TestFunctional/serial/LogsFileCmd (1.16s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 logs --file /tmp/TestFunctionalserialLogsFileCmd1805015026/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-622176 logs --file /tmp/TestFunctionalserialLogsFileCmd1805015026/001/logs.txt: (1.156199821s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.16s)

TestFunctional/serial/InvalidService (5.17s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-622176 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-622176
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-622176: exit status 115 (310.430105ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.61:31317 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-622176 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-622176 delete -f testdata/invalidsvc.yaml: (1.654811723s)
--- PASS: TestFunctional/serial/InvalidService (5.17s)

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622176 config get cpus: exit status 14 (85.683372ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622176 config get cpus: exit status 14 (67.825434ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

TestFunctional/parallel/DashboardCmd (15.7s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-622176 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-622176 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 697759: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.70s)

TestFunctional/parallel/DryRun (0.33s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-622176 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-622176 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (162.260616ms)

-- stdout --
	* [functional-622176] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17824-683489/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17824-683489/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1218 11:39:33.950635  697481 out.go:296] Setting OutFile to fd 1 ...
	I1218 11:39:33.950921  697481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:39:33.950932  697481 out.go:309] Setting ErrFile to fd 2...
	I1218 11:39:33.950936  697481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:39:33.951138  697481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
	I1218 11:39:33.951752  697481 out.go:303] Setting JSON to false
	I1218 11:39:33.952810  697481 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12120,"bootTime":1702887454,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1218 11:39:33.952889  697481 start.go:138] virtualization: kvm guest
	I1218 11:39:33.955334  697481 out.go:177] * [functional-622176] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1218 11:39:33.957221  697481 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 11:39:33.957298  697481 notify.go:220] Checking for updates...
	I1218 11:39:33.958959  697481 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 11:39:33.960544  697481 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:39:33.962186  697481 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17824-683489/.minikube
	I1218 11:39:33.963792  697481 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1218 11:39:33.965321  697481 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 11:39:33.967260  697481 config.go:182] Loaded profile config "functional-622176": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:39:33.967666  697481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:39:33.967750  697481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:39:33.983260  697481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46785
	I1218 11:39:33.983723  697481 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:39:33.984329  697481 main.go:141] libmachine: Using API Version  1
	I1218 11:39:33.984358  697481 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:39:33.984774  697481 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:39:33.984950  697481 main.go:141] libmachine: (functional-622176) Calling .DriverName
	I1218 11:39:33.985211  697481 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 11:39:33.985508  697481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:39:33.985547  697481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:39:34.002480  697481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I1218 11:39:34.002932  697481 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:39:34.003500  697481 main.go:141] libmachine: Using API Version  1
	I1218 11:39:34.003529  697481 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:39:34.003875  697481 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:39:34.004070  697481 main.go:141] libmachine: (functional-622176) Calling .DriverName
	I1218 11:39:34.043014  697481 out.go:177] * Using the kvm2 driver based on existing profile
	I1218 11:39:34.044596  697481 start.go:298] selected driver: kvm2
	I1218 11:39:34.044623  697481 start.go:902] validating driver "kvm2" against &{Name:functional-622176 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-622176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:39:34.044766  697481 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 11:39:34.047142  697481 out.go:177] 
	W1218 11:39:34.048977  697481 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1218 11:39:34.050822  697481 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-622176 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.33s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-622176 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-622176 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (158.361649ms)

-- stdout --
	* [functional-622176] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17824-683489/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17824-683489/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1218 11:39:33.795688  697453 out.go:296] Setting OutFile to fd 1 ...
	I1218 11:39:33.795826  697453 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:39:33.795835  697453 out.go:309] Setting ErrFile to fd 2...
	I1218 11:39:33.795839  697453 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:39:33.796169  697453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
	I1218 11:39:33.796725  697453 out.go:303] Setting JSON to false
	I1218 11:39:33.797743  697453 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12120,"bootTime":1702887454,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1218 11:39:33.797809  697453 start.go:138] virtualization: kvm guest
	I1218 11:39:33.800338  697453 out.go:177] * [functional-622176] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1218 11:39:33.802415  697453 out.go:177]   - MINIKUBE_LOCATION=17824
	I1218 11:39:33.803890  697453 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1218 11:39:33.802484  697453 notify.go:220] Checking for updates...
	I1218 11:39:33.805593  697453 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17824-683489/kubeconfig
	I1218 11:39:33.807082  697453 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17824-683489/.minikube
	I1218 11:39:33.808738  697453 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1218 11:39:33.810290  697453 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1218 11:39:33.812118  697453 config.go:182] Loaded profile config "functional-622176": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:39:33.812522  697453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:39:33.812593  697453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:39:33.828207  697453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39483
	I1218 11:39:33.828582  697453 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:39:33.829162  697453 main.go:141] libmachine: Using API Version  1
	I1218 11:39:33.829188  697453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:39:33.829534  697453 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:39:33.829732  697453 main.go:141] libmachine: (functional-622176) Calling .DriverName
	I1218 11:39:33.830023  697453 driver.go:392] Setting default libvirt URI to qemu:///system
	I1218 11:39:33.830320  697453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:39:33.830379  697453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:39:33.845383  697453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44013
	I1218 11:39:33.845849  697453 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:39:33.846342  697453 main.go:141] libmachine: Using API Version  1
	I1218 11:39:33.846370  697453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:39:33.846698  697453 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:39:33.846884  697453 main.go:141] libmachine: (functional-622176) Calling .DriverName
	I1218 11:39:33.880992  697453 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1218 11:39:33.882661  697453 start.go:298] selected driver: kvm2
	I1218 11:39:33.882683  697453 start.go:902] validating driver "kvm2" against &{Name:functional-622176 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17765/minikube-v1.32.1-1702490427-17765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702585004-17765@sha256:ba2fbb9efd5b81a443834ba0800f3bc13feea942ce199df74b0054a9bdb32bbd Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-622176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.61 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1218 11:39:33.882858  697453 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1218 11:39:33.885352  697453 out.go:177] 
	W1218 11:39:33.886805  697453 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1218 11:39:33.888369  697453 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
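Both DryRun and InternationalLanguage pass precisely because minikube rejects the deliberately undersized `--memory 250MB` request and exits non-zero (status 23 here). A minimal sketch of that validation logic, assuming a hypothetical `check_memory` helper; the 1800MB floor and the error text mirror the log above, but this is not minikube's actual implementation:

```shell
# Hypothetical sketch, not minikube internals: reject a memory request
# below the usable minimum with the same non-zero status seen in the log.
check_memory() {
  requested_mb=$1
  minimum_mb=1800
  if [ "$requested_mb" -lt "$minimum_mb" ]; then
    echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: ${requested_mb}MiB < ${minimum_mb}MB" >&2
    return 23   # matches the exit status 23 reported by the test
  fi
  echo "memory OK: ${requested_mb}MiB"
}

check_memory 250 2>/dev/null || echo "rejected with status $?"
check_memory 4000
```

The tests only assert on the exit status and localized message, so any driver would hit the same gate before a VM is created.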

TestFunctional/parallel/StatusCmd (1.27s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.27s)

TestFunctional/parallel/ServiceCmdConnect (30.58s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-622176 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-622176 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-xgplr" [2bb72f61-0185-401e-94fd-011c652d1435] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-xgplr" [2bb72f61-0185-401e-94fd-011c652d1435] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 30.0047211s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.61:32532
functional_test.go:1674: http://192.168.39.61:32532: success! body:

Hostname: hello-node-connect-55497b8b78-xgplr

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.61:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.61:32532
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (30.58s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (56.84s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0407201d-3b73-4909-88ac-424eec7f53ae] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004959916s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-622176 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-622176 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-622176 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-622176 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b7e25b22-f70f-480e-b4cd-008dbdb63708] Pending
helpers_test.go:344: "sp-pod" [b7e25b22-f70f-480e-b4cd-008dbdb63708] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1218 11:39:10.586041  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [b7e25b22-f70f-480e-b4cd-008dbdb63708] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 33.005668497s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-622176 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-622176 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-622176 delete -f testdata/storage-provisioner/pod.yaml: (1.517341064s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-622176 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5771cc44-a8e5-4faa-95fe-1b3fdd17f901] Pending
helpers_test.go:344: "sp-pod" [5771cc44-a8e5-4faa-95fe-1b3fdd17f901] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5771cc44-a8e5-4faa-95fe-1b3fdd17f901] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004784637s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-622176 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (56.84s)

TestFunctional/parallel/SSHCmd (0.49s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

TestFunctional/parallel/CpCmd (1.57s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh -n functional-622176 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 cp functional-622176:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3347251423/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh -n functional-622176 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh -n functional-622176 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.57s)

TestFunctional/parallel/MySQL (35.44s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-622176 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-4cbpt" [50aefe06-9354-4f60-9b9a-26cd92e036c1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-4cbpt" [50aefe06-9354-4f60-9b9a-26cd92e036c1] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.004908507s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-622176 exec mysql-859648c796-4cbpt -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-622176 exec mysql-859648c796-4cbpt -- mysql -ppassword -e "show databases;": exit status 1 (168.093879ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-622176 exec mysql-859648c796-4cbpt -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-622176 exec mysql-859648c796-4cbpt -- mysql -ppassword -e "show databases;": exit status 1 (293.725343ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-622176 exec mysql-859648c796-4cbpt -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-622176 exec mysql-859648c796-4cbpt -- mysql -ppassword -e "show databases;": exit status 1 (233.205811ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-622176 exec mysql-859648c796-4cbpt -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-622176 exec mysql-859648c796-4cbpt -- mysql -ppassword -e "show databases;": exit status 1 (230.261382ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-622176 exec mysql-859648c796-4cbpt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.44s)
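The ERROR 2002 (socket not ready) and ERROR 1045 (auth not yet provisioned) exits above are transient: the test simply re-runs the `kubectl exec ... mysql` probe until mysqld finishes initializing. A sketch of that retry-until-ready pattern, with a stub command standing in for the probe; `retry` and the attempt counts are illustrative, not the test's actual code:

```shell
# Illustrative retry helper: re-run a probe command until it succeeds
# or the attempt budget is exhausted. "$@" is the probe to execute.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    # the real test sleeps between attempts while mysqld initializes
  done
  return 1
}

retry 5 true  && echo "probe succeeded"
retry 2 false || echo "probe gave up"
```

In the log, the probe fails four times over roughly a second of backoff before the fifth run succeeds and the test passes.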

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/690739/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "sudo cat /etc/test/nested/copy/690739/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.59s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/690739.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "sudo cat /etc/ssl/certs/690739.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/690739.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "sudo cat /usr/share/ca-certificates/690739.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/6907392.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "sudo cat /etc/ssl/certs/6907392.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/6907392.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "sudo cat /usr/share/ca-certificates/6907392.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.59s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-622176 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622176 ssh "sudo systemctl is-active crio": exit status 1 (271.033633ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)
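The non-zero exit above is expected: `systemctl is-active` prints the unit state and signals it through its exit status (the `status 3` seen over SSH accompanies `inactive`), so the test passes when the command "fails". A sketch of that check with a stub in place of systemctl, so it runs without systemd; `fake_is_active` is illustrative only:

```shell
# fake_is_active stands in for "systemctl is-active crio": it prints the
# unit state and returns a non-zero status for any state except "active".
fake_is_active() {
  echo "inactive"
  return 3
}

state=$(fake_is_active)
rc=$?   # assignment preserves the command substitution's exit status
if [ "$state" = "inactive" ] && [ "$rc" -ne 0 ]; then
  echo "crio correctly disabled"
fi
```

This is why the test treats exit status 1/3 plus an `inactive` line as a pass: the non-active runtime being disabled is the asserted condition.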

TestFunctional/parallel/License (0.64s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

TestFunctional/parallel/DockerEnv/bash (1.08s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-622176 docker-env) && out/minikube-linux-amd64 status -p functional-622176"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-622176 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.08s)

TestFunctional/parallel/ServiceCmd/DeployApp (29.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-622176 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-622176 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-j9wrd" [63b01874-5637-45e4-840c-f33a22322104] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-j9wrd" [63b01874-5637-45e4-840c-f33a22322104] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 29.005290617s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (29.17s)

TestFunctional/parallel/ServiceCmd/List (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 service list -o json
functional_test.go:1493: Took "486.626312ms" to run "out/minikube-linux-amd64 -p functional-622176 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.61:30509
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "263.662921ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "62.160104ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "321.890498ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "68.138074ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.61:30509
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.60s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-622176 /tmp/TestFunctionalparallelMountCmdany-port3520070332/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1702899572001252927" to /tmp/TestFunctionalparallelMountCmdany-port3520070332/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1702899572001252927" to /tmp/TestFunctionalparallelMountCmdany-port3520070332/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1702899572001252927" to /tmp/TestFunctionalparallelMountCmdany-port3520070332/001/test-1702899572001252927
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622176 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (272.877148ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 18 11:39 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 18 11:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 18 11:39 test-1702899572001252927
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh cat /mount-9p/test-1702899572001252927
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-622176 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4dc5c7e0-d347-4da7-bc79-22e5113ffc05] Pending
helpers_test.go:344: "busybox-mount" [4dc5c7e0-d347-4da7-bc79-22e5113ffc05] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4dc5c7e0-d347-4da7-bc79-22e5113ffc05] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4dc5c7e0-d347-4da7-bc79-22e5113ffc05] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.00581862s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-622176 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-622176 /tmp/TestFunctionalparallelMountCmdany-port3520070332/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.76s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-622176 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-622176
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-622176
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-622176 image ls --format short --alsologtostderr:
I1218 11:39:57.915240  698763 out.go:296] Setting OutFile to fd 1 ...
I1218 11:39:57.915362  698763 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 11:39:57.915372  698763 out.go:309] Setting ErrFile to fd 2...
I1218 11:39:57.915376  698763 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 11:39:57.915572  698763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
I1218 11:39:57.916162  698763 config.go:182] Loaded profile config "functional-622176": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 11:39:57.916261  698763 config.go:182] Loaded profile config "functional-622176": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 11:39:57.916606  698763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1218 11:39:57.916658  698763 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 11:39:57.931659  698763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33473
I1218 11:39:57.932081  698763 main.go:141] libmachine: () Calling .GetVersion
I1218 11:39:57.932828  698763 main.go:141] libmachine: Using API Version  1
I1218 11:39:57.932857  698763 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 11:39:57.933372  698763 main.go:141] libmachine: () Calling .GetMachineName
I1218 11:39:57.933599  698763 main.go:141] libmachine: (functional-622176) Calling .GetState
I1218 11:39:57.935594  698763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1218 11:39:57.935664  698763 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 11:39:57.951779  698763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
I1218 11:39:57.952159  698763 main.go:141] libmachine: () Calling .GetVersion
I1218 11:39:57.952706  698763 main.go:141] libmachine: Using API Version  1
I1218 11:39:57.952737  698763 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 11:39:57.953106  698763 main.go:141] libmachine: () Calling .GetMachineName
I1218 11:39:57.953278  698763 main.go:141] libmachine: (functional-622176) Calling .DriverName
I1218 11:39:57.953505  698763 ssh_runner.go:195] Run: systemctl --version
I1218 11:39:57.953552  698763 main.go:141] libmachine: (functional-622176) Calling .GetSSHHostname
I1218 11:39:57.956459  698763 main.go:141] libmachine: (functional-622176) DBG | domain functional-622176 has defined MAC address 52:54:00:fb:f9:9c in network mk-functional-622176
I1218 11:39:57.956910  698763 main.go:141] libmachine: (functional-622176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:f9:9c", ip: ""} in network mk-functional-622176: {Iface:virbr1 ExpiryTime:2023-12-18 12:35:53 +0000 UTC Type:0 Mac:52:54:00:fb:f9:9c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:functional-622176 Clientid:01:52:54:00:fb:f9:9c}
I1218 11:39:57.956983  698763 main.go:141] libmachine: (functional-622176) DBG | domain functional-622176 has defined IP address 192.168.39.61 and MAC address 52:54:00:fb:f9:9c in network mk-functional-622176
I1218 11:39:57.957197  698763 main.go:141] libmachine: (functional-622176) Calling .GetSSHPort
I1218 11:39:57.957390  698763 main.go:141] libmachine: (functional-622176) Calling .GetSSHKeyPath
I1218 11:39:57.957560  698763 main.go:141] libmachine: (functional-622176) Calling .GetSSHUsername
I1218 11:39:57.957715  698763 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/functional-622176/id_rsa Username:docker}
I1218 11:39:58.080093  698763 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1218 11:39:58.146170  698763 main.go:141] libmachine: Making call to close driver server
I1218 11:39:58.146187  698763 main.go:141] libmachine: (functional-622176) Calling .Close
I1218 11:39:58.146514  698763 main.go:141] libmachine: Successfully made call to close driver server
I1218 11:39:58.146541  698763 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 11:39:58.146557  698763 main.go:141] libmachine: Making call to close driver server
I1218 11:39:58.146562  698763 main.go:141] libmachine: (functional-622176) DBG | Closing plugin on server side
I1218 11:39:58.146569  698763 main.go:141] libmachine: (functional-622176) Calling .Close
I1218 11:39:58.146893  698763 main.go:141] libmachine: Successfully made call to close driver server
I1218 11:39:58.146926  698763 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 11:39:58.146904  698763 main.go:141] libmachine: (functional-622176) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-622176 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-622176 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | a6bd71f48f683 | 187MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-622176 | 4061d7cdd8d62 | 30B    |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-622176 image ls --format table --alsologtostderr:
I1218 11:39:57.907962  698762 out.go:296] Setting OutFile to fd 1 ...
I1218 11:39:57.909367  698762 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 11:39:57.909435  698762 out.go:309] Setting ErrFile to fd 2...
I1218 11:39:57.909457  698762 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 11:39:57.910316  698762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
I1218 11:39:57.911882  698762 config.go:182] Loaded profile config "functional-622176": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 11:39:57.912087  698762 config.go:182] Loaded profile config "functional-622176": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 11:39:57.912637  698762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1218 11:39:57.912697  698762 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 11:39:57.927877  698762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44201
I1218 11:39:57.928730  698762 main.go:141] libmachine: () Calling .GetVersion
I1218 11:39:57.929466  698762 main.go:141] libmachine: Using API Version  1
I1218 11:39:57.929500  698762 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 11:39:57.929900  698762 main.go:141] libmachine: () Calling .GetMachineName
I1218 11:39:57.930099  698762 main.go:141] libmachine: (functional-622176) Calling .GetState
I1218 11:39:57.932160  698762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1218 11:39:57.932220  698762 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 11:39:57.947565  698762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44861
I1218 11:39:57.948026  698762 main.go:141] libmachine: () Calling .GetVersion
I1218 11:39:57.948534  698762 main.go:141] libmachine: Using API Version  1
I1218 11:39:57.948560  698762 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 11:39:57.948855  698762 main.go:141] libmachine: () Calling .GetMachineName
I1218 11:39:57.949088  698762 main.go:141] libmachine: (functional-622176) Calling .DriverName
I1218 11:39:57.949280  698762 ssh_runner.go:195] Run: systemctl --version
I1218 11:39:57.949308  698762 main.go:141] libmachine: (functional-622176) Calling .GetSSHHostname
I1218 11:39:57.952633  698762 main.go:141] libmachine: (functional-622176) DBG | domain functional-622176 has defined MAC address 52:54:00:fb:f9:9c in network mk-functional-622176
I1218 11:39:57.953252  698762 main.go:141] libmachine: (functional-622176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:f9:9c", ip: ""} in network mk-functional-622176: {Iface:virbr1 ExpiryTime:2023-12-18 12:35:53 +0000 UTC Type:0 Mac:52:54:00:fb:f9:9c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:functional-622176 Clientid:01:52:54:00:fb:f9:9c}
I1218 11:39:57.953348  698762 main.go:141] libmachine: (functional-622176) DBG | domain functional-622176 has defined IP address 192.168.39.61 and MAC address 52:54:00:fb:f9:9c in network mk-functional-622176
I1218 11:39:57.953644  698762 main.go:141] libmachine: (functional-622176) Calling .GetSSHPort
I1218 11:39:57.953838  698762 main.go:141] libmachine: (functional-622176) Calling .GetSSHKeyPath
I1218 11:39:57.953955  698762 main.go:141] libmachine: (functional-622176) Calling .GetSSHUsername
I1218 11:39:57.954185  698762 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/functional-622176/id_rsa Username:docker}
I1218 11:39:58.088169  698762 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1218 11:39:58.169594  698762 main.go:141] libmachine: Making call to close driver server
I1218 11:39:58.169612  698762 main.go:141] libmachine: (functional-622176) Calling .Close
I1218 11:39:58.169896  698762 main.go:141] libmachine: Successfully made call to close driver server
I1218 11:39:58.169919  698762 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 11:39:58.169929  698762 main.go:141] libmachine: Making call to close driver server
I1218 11:39:58.169938  698762 main.go:141] libmachine: (functional-622176) Calling .Close
I1218 11:39:58.170273  698762 main.go:141] libmachine: Successfully made call to close driver server
I1218 11:39:58.170280  698762 main.go:141] libmachine: (functional-622176) DBG | Closing plugin on server side
I1218 11:39:58.170288  698762 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-622176 image ls --format json --alsologtostderr:
[{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},
{"id":"4061d7cdd8d6251657d2eb27f5e88758e588bc2c393a308f01e3a6e2f8776080","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-622176"],"size":"30"},
{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},
{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},
{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},
{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},
{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},
{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},
{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},
{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-622176"],"size":"32900000"},
{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},
{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-622176 image ls --format json --alsologtostderr:
I1218 11:39:57.900614  698760 out.go:296] Setting OutFile to fd 1 ...
I1218 11:39:57.900782  698760 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 11:39:57.900793  698760 out.go:309] Setting ErrFile to fd 2...
I1218 11:39:57.900800  698760 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 11:39:57.901013  698760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
I1218 11:39:57.901709  698760 config.go:182] Loaded profile config "functional-622176": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 11:39:57.901837  698760 config.go:182] Loaded profile config "functional-622176": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 11:39:57.902284  698760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1218 11:39:57.902356  698760 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 11:39:57.919165  698760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44321
I1218 11:39:57.919733  698760 main.go:141] libmachine: () Calling .GetVersion
I1218 11:39:57.920625  698760 main.go:141] libmachine: Using API Version  1
I1218 11:39:57.920659  698760 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 11:39:57.921038  698760 main.go:141] libmachine: () Calling .GetMachineName
I1218 11:39:57.921293  698760 main.go:141] libmachine: (functional-622176) Calling .GetState
I1218 11:39:57.923644  698760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1218 11:39:57.923702  698760 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 11:39:57.939764  698760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
I1218 11:39:57.940258  698760 main.go:141] libmachine: () Calling .GetVersion
I1218 11:39:57.940711  698760 main.go:141] libmachine: Using API Version  1
I1218 11:39:57.940738  698760 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 11:39:57.941089  698760 main.go:141] libmachine: () Calling .GetMachineName
I1218 11:39:57.941292  698760 main.go:141] libmachine: (functional-622176) Calling .DriverName
I1218 11:39:57.941471  698760 ssh_runner.go:195] Run: systemctl --version
I1218 11:39:57.941507  698760 main.go:141] libmachine: (functional-622176) Calling .GetSSHHostname
I1218 11:39:57.944495  698760 main.go:141] libmachine: (functional-622176) DBG | domain functional-622176 has defined MAC address 52:54:00:fb:f9:9c in network mk-functional-622176
I1218 11:39:57.945206  698760 main.go:141] libmachine: (functional-622176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:f9:9c", ip: ""} in network mk-functional-622176: {Iface:virbr1 ExpiryTime:2023-12-18 12:35:53 +0000 UTC Type:0 Mac:52:54:00:fb:f9:9c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:functional-622176 Clientid:01:52:54:00:fb:f9:9c}
I1218 11:39:57.945249  698760 main.go:141] libmachine: (functional-622176) DBG | domain functional-622176 has defined IP address 192.168.39.61 and MAC address 52:54:00:fb:f9:9c in network mk-functional-622176
I1218 11:39:57.945471  698760 main.go:141] libmachine: (functional-622176) Calling .GetSSHPort
I1218 11:39:57.945660  698760 main.go:141] libmachine: (functional-622176) Calling .GetSSHKeyPath
I1218 11:39:57.945837  698760 main.go:141] libmachine: (functional-622176) Calling .GetSSHUsername
I1218 11:39:57.945962  698760 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/functional-622176/id_rsa Username:docker}
I1218 11:39:58.060410  698760 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1218 11:39:58.125613  698760 main.go:141] libmachine: Making call to close driver server
I1218 11:39:58.125631  698760 main.go:141] libmachine: (functional-622176) Calling .Close
I1218 11:39:58.125981  698760 main.go:141] libmachine: (functional-622176) DBG | Closing plugin on server side
I1218 11:39:58.125985  698760 main.go:141] libmachine: Successfully made call to close driver server
I1218 11:39:58.126019  698760 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 11:39:58.126038  698760 main.go:141] libmachine: Making call to close driver server
I1218 11:39:58.126053  698760 main.go:141] libmachine: (functional-622176) Calling .Close
I1218 11:39:58.126300  698760 main.go:141] libmachine: (functional-622176) DBG | Closing plugin on server side
I1218 11:39:58.126352  698760 main.go:141] libmachine: Successfully made call to close driver server
I1218 11:39:58.126377  698760 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
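For readers reproducing the check above: `image ls --format json` prints a single JSON array of image records with `id`, `repoDigests`, `repoTags`, and `size` (a decimal string of bytes). A minimal stand-alone sketch of consuming that shape, using two entries abbreviated from this log (the shortened `id` digests are placeholders, not real values):

```python
import json

# Two records abbreviated from the `image ls --format json` output above;
# the shortened "id" digests are placeholders, not real values.
sample = """[
  {"id": "6e38f40d628d...", "repoDigests": [],
   "repoTags": ["gcr.io/k8s-minikube/storage-provisioner:v5"], "size": "31500000"},
  {"id": "da86e6ba6ca1...", "repoDigests": [],
   "repoTags": ["registry.k8s.io/pause:3.1"], "size": "742000"}
]"""

images = json.loads(sample)
# "size" is a decimal string, so convert before doing arithmetic on it.
total_bytes = sum(int(img["size"]) for img in images)
tags = sorted(tag for img in images for tag in img["repoTags"])
print(total_bytes)  # 32242000
```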
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-622176 image ls --format yaml --alsologtostderr:
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 4061d7cdd8d6251657d2eb27f5e88758e588bc2c393a308f01e3a6e2f8776080
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-622176
size: "30"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-622176
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-622176 image ls --format yaml --alsologtostderr:
I1218 11:39:57.909473  698761 out.go:296] Setting OutFile to fd 1 ...
I1218 11:39:57.909817  698761 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 11:39:57.909828  698761 out.go:309] Setting ErrFile to fd 2...
I1218 11:39:57.909835  698761 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 11:39:57.910087  698761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
I1218 11:39:57.910871  698761 config.go:182] Loaded profile config "functional-622176": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 11:39:57.911197  698761 config.go:182] Loaded profile config "functional-622176": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 11:39:57.911663  698761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1218 11:39:57.911742  698761 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 11:39:57.927429  698761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32921
I1218 11:39:57.927972  698761 main.go:141] libmachine: () Calling .GetVersion
I1218 11:39:57.928543  698761 main.go:141] libmachine: Using API Version  1
I1218 11:39:57.928588  698761 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 11:39:57.928933  698761 main.go:141] libmachine: () Calling .GetMachineName
I1218 11:39:57.929104  698761 main.go:141] libmachine: (functional-622176) Calling .GetState
I1218 11:39:57.930905  698761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1218 11:39:57.930948  698761 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 11:39:57.946523  698761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34473
I1218 11:39:57.947013  698761 main.go:141] libmachine: () Calling .GetVersion
I1218 11:39:57.947480  698761 main.go:141] libmachine: Using API Version  1
I1218 11:39:57.947497  698761 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 11:39:57.947878  698761 main.go:141] libmachine: () Calling .GetMachineName
I1218 11:39:57.948131  698761 main.go:141] libmachine: (functional-622176) Calling .DriverName
I1218 11:39:57.948327  698761 ssh_runner.go:195] Run: systemctl --version
I1218 11:39:57.948357  698761 main.go:141] libmachine: (functional-622176) Calling .GetSSHHostname
I1218 11:39:57.951732  698761 main.go:141] libmachine: (functional-622176) DBG | domain functional-622176 has defined MAC address 52:54:00:fb:f9:9c in network mk-functional-622176
I1218 11:39:57.952160  698761 main.go:141] libmachine: (functional-622176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:f9:9c", ip: ""} in network mk-functional-622176: {Iface:virbr1 ExpiryTime:2023-12-18 12:35:53 +0000 UTC Type:0 Mac:52:54:00:fb:f9:9c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:functional-622176 Clientid:01:52:54:00:fb:f9:9c}
I1218 11:39:57.952194  698761 main.go:141] libmachine: (functional-622176) DBG | domain functional-622176 has defined IP address 192.168.39.61 and MAC address 52:54:00:fb:f9:9c in network mk-functional-622176
I1218 11:39:57.952288  698761 main.go:141] libmachine: (functional-622176) Calling .GetSSHPort
I1218 11:39:57.952425  698761 main.go:141] libmachine: (functional-622176) Calling .GetSSHKeyPath
I1218 11:39:57.952590  698761 main.go:141] libmachine: (functional-622176) Calling .GetSSHUsername
I1218 11:39:57.952719  698761 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/functional-622176/id_rsa Username:docker}
I1218 11:39:58.054421  698761 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1218 11:39:58.106678  698761 main.go:141] libmachine: Making call to close driver server
I1218 11:39:58.106704  698761 main.go:141] libmachine: (functional-622176) Calling .Close
I1218 11:39:58.107010  698761 main.go:141] libmachine: (functional-622176) DBG | Closing plugin on server side
I1218 11:39:58.107025  698761 main.go:141] libmachine: Successfully made call to close driver server
I1218 11:39:58.107039  698761 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 11:39:58.107087  698761 main.go:141] libmachine: Making call to close driver server
I1218 11:39:58.107105  698761 main.go:141] libmachine: (functional-622176) Calling .Close
I1218 11:39:58.107334  698761 main.go:141] libmachine: Successfully made call to close driver server
I1218 11:39:58.107357  698761 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
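The `--format yaml` listing above is flat enough (one `- id:` line opening each record, a quoted `size:` line closing it) that a stdlib-only parse works as a sketch; a real consumer should use a YAML library. The sample mirrors two records from the listing, laid out exactly as the report renders them:

```python
def parse_listing(text: str) -> list[dict]:
    """Split the flat YAML-style image listing into per-image dicts.

    Each record starts at a "- id:" line; "- <tag>" lines under repoTags
    and the quoted "size:" line are attached to the current record.
    """
    images, current = [], None
    for line in text.splitlines():
        if line.startswith("- id: "):
            current = {"id": line[6:], "repoTags": []}
            images.append(current)
        elif line.startswith("size: ") and current is not None:
            current["size"] = int(line[6:].strip('"'))
        elif line.startswith("- ") and current is not None:
            current["repoTags"].append(line[2:])
    return images

# Two records copied from the listing above.
listing = """- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
"""

images = parse_listing(listing)
print([(img["repoTags"], img["size"]) for img in images])
```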
TestFunctional/parallel/ImageCommands/ImageBuild (3.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622176 ssh pgrep buildkitd: exit status 1 (218.754098ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image build -t localhost/my-image:functional-622176 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-622176 image build -t localhost/my-image:functional-622176 testdata/build --alsologtostderr: (3.160226191s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-622176 image build -t localhost/my-image:functional-622176 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 3262fe5330cc
Removing intermediate container 3262fe5330cc
---> 13376b772d1b
Step 3/3 : ADD content.txt /
---> e5a9f2484ca2
Successfully built e5a9f2484ca2
Successfully tagged localhost/my-image:functional-622176
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-622176 image build -t localhost/my-image:functional-622176 testdata/build --alsologtostderr:
I1218 11:39:58.406702  698878 out.go:296] Setting OutFile to fd 1 ...
I1218 11:39:58.406918  698878 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 11:39:58.406933  698878 out.go:309] Setting ErrFile to fd 2...
I1218 11:39:58.406941  698878 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1218 11:39:58.407270  698878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
I1218 11:39:58.408278  698878 config.go:182] Loaded profile config "functional-622176": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 11:39:58.408950  698878 config.go:182] Loaded profile config "functional-622176": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1218 11:39:58.409415  698878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1218 11:39:58.409461  698878 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 11:39:58.424672  698878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39825
I1218 11:39:58.425270  698878 main.go:141] libmachine: () Calling .GetVersion
I1218 11:39:58.425931  698878 main.go:141] libmachine: Using API Version  1
I1218 11:39:58.425961  698878 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 11:39:58.426355  698878 main.go:141] libmachine: () Calling .GetMachineName
I1218 11:39:58.426596  698878 main.go:141] libmachine: (functional-622176) Calling .GetState
I1218 11:39:58.428745  698878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1218 11:39:58.428807  698878 main.go:141] libmachine: Launching plugin server for driver kvm2
I1218 11:39:58.443313  698878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
I1218 11:39:58.443855  698878 main.go:141] libmachine: () Calling .GetVersion
I1218 11:39:58.444464  698878 main.go:141] libmachine: Using API Version  1
I1218 11:39:58.444498  698878 main.go:141] libmachine: () Calling .SetConfigRaw
I1218 11:39:58.444856  698878 main.go:141] libmachine: () Calling .GetMachineName
I1218 11:39:58.445056  698878 main.go:141] libmachine: (functional-622176) Calling .DriverName
I1218 11:39:58.445365  698878 ssh_runner.go:195] Run: systemctl --version
I1218 11:39:58.445414  698878 main.go:141] libmachine: (functional-622176) Calling .GetSSHHostname
I1218 11:39:58.448645  698878 main.go:141] libmachine: (functional-622176) DBG | domain functional-622176 has defined MAC address 52:54:00:fb:f9:9c in network mk-functional-622176
I1218 11:39:58.449134  698878 main.go:141] libmachine: (functional-622176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:f9:9c", ip: ""} in network mk-functional-622176: {Iface:virbr1 ExpiryTime:2023-12-18 12:35:53 +0000 UTC Type:0 Mac:52:54:00:fb:f9:9c Iaid: IPaddr:192.168.39.61 Prefix:24 Hostname:functional-622176 Clientid:01:52:54:00:fb:f9:9c}
I1218 11:39:58.449170  698878 main.go:141] libmachine: (functional-622176) DBG | domain functional-622176 has defined IP address 192.168.39.61 and MAC address 52:54:00:fb:f9:9c in network mk-functional-622176
I1218 11:39:58.449379  698878 main.go:141] libmachine: (functional-622176) Calling .GetSSHPort
I1218 11:39:58.449577  698878 main.go:141] libmachine: (functional-622176) Calling .GetSSHKeyPath
I1218 11:39:58.449762  698878 main.go:141] libmachine: (functional-622176) Calling .GetSSHUsername
I1218 11:39:58.449957  698878 sshutil.go:53] new ssh client: &{IP:192.168.39.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/functional-622176/id_rsa Username:docker}
I1218 11:39:58.541244  698878 build_images.go:151] Building image from path: /tmp/build.3496643157.tar
I1218 11:39:58.541328  698878 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1218 11:39:58.556931  698878 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3496643157.tar
I1218 11:39:58.568435  698878 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3496643157.tar: stat -c "%s %y" /var/lib/minikube/build/build.3496643157.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3496643157.tar': No such file or directory
I1218 11:39:58.568478  698878 ssh_runner.go:362] scp /tmp/build.3496643157.tar --> /var/lib/minikube/build/build.3496643157.tar (3072 bytes)
I1218 11:39:58.597849  698878 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3496643157
I1218 11:39:58.608073  698878 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3496643157 -xf /var/lib/minikube/build/build.3496643157.tar
I1218 11:39:58.621858  698878 docker.go:346] Building image: /var/lib/minikube/build/build.3496643157
I1218 11:39:58.621960  698878 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-622176 /var/lib/minikube/build/build.3496643157
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I1218 11:40:01.466527  698878 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-622176 /var/lib/minikube/build/build.3496643157: (2.844530104s)
I1218 11:40:01.466598  698878 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3496643157
I1218 11:40:01.478049  698878 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3496643157.tar
I1218 11:40:01.488185  698878 build_images.go:207] Built localhost/my-image:functional-622176 from /tmp/build.3496643157.tar
I1218 11:40:01.488229  698878 build_images.go:123] succeeded building to: functional-622176
I1218 11:40:01.488236  698878 build_images.go:124] failed building to: 
I1218 11:40:01.488316  698878 main.go:141] libmachine: Making call to close driver server
I1218 11:40:01.488337  698878 main.go:141] libmachine: (functional-622176) Calling .Close
I1218 11:40:01.488694  698878 main.go:141] libmachine: Successfully made call to close driver server
I1218 11:40:01.488756  698878 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 11:40:01.488725  698878 main.go:141] libmachine: (functional-622176) DBG | Closing plugin on server side
I1218 11:40:01.488771  698878 main.go:141] libmachine: Making call to close driver server
I1218 11:40:01.488801  698878 main.go:141] libmachine: (functional-622176) Calling .Close
I1218 11:40:01.489035  698878 main.go:141] libmachine: Successfully made call to close driver server
I1218 11:40:01.489051  698878 main.go:141] libmachine: Making call to close connection to plugin binary
I1218 11:40:01.489086  698878 main.go:141] libmachine: (functional-622176) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.59s)
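The three `Step n/3` lines in the build output above pin down the shape of the Dockerfile under `testdata/build`; reconstructed from this log (the contents of `content.txt` are not shown here and are whatever file the test ships in the build context):

```dockerfile
# Reconstructed from the Step 1/3 .. 3/3 lines above.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```

This matches the flow the `build_images.go` lines record: pack the context into a tar, scp it into the VM, and run `docker build` there.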
TestFunctional/parallel/ImageCommands/Setup (2.12s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.091474624s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-622176
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.12s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image load --daemon gcr.io/google-containers/addon-resizer:functional-622176 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-622176 image load --daemon gcr.io/google-containers/addon-resizer:functional-622176 --alsologtostderr: (4.979691823s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.23s)
TestFunctional/parallel/MountCmd/specific-port (1.95s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-622176 /tmp/TestFunctionalparallelMountCmdspecific-port2084624841/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622176 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (303.690354ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-622176 /tmp/TestFunctionalparallelMountCmdspecific-port2084624841/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622176 ssh "sudo umount -f /mount-9p": exit status 1 (222.495481ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-622176 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-622176 /tmp/TestFunctionalparallelMountCmdspecific-port2084624841/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)
TestFunctional/parallel/MountCmd/VerifyCleanup (0.87s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-622176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1696422989/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-622176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1696422989/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-622176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1696422989/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-622176 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-622176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1696422989/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-622176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1696422989/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-622176 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1696422989/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.87s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image load --daemon gcr.io/google-containers/addon-resizer:functional-622176 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-622176 image load --daemon gcr.io/google-containers/addon-resizer:functional-622176 --alsologtostderr: (2.424516143s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.91s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.9s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
2023/12/18 11:39:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.167224611s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-622176
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image load --daemon gcr.io/google-containers/addon-resizer:functional-622176 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-622176 image load --daemon gcr.io/google-containers/addon-resizer:functional-622176 --alsologtostderr: (3.503075648s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.90s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image save gcr.io/google-containers/addon-resizer:functional-622176 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-622176 image save gcr.io/google-containers/addon-resizer:functional-622176 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.210344885s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.21s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image rm gcr.io/google-containers/addon-resizer:functional-622176 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-622176 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.214909805s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.43s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.40s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-622176
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-622176 image save --daemon gcr.io/google-containers/addon-resizer:functional-622176 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-622176 image save --daemon gcr.io/google-containers/addon-resizer:functional-622176 --alsologtostderr: (1.367235604s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-622176
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.40s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-622176
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-622176
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-622176
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (343.91s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-428071 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-428071 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m45.848680442s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-428071 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-428071 cache add gcr.io/k8s-minikube/gvisor-addon:2: (24.142136044s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-428071 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-428071 addons enable gvisor: (4.999063141s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [2ae075f1-3020-4806-902c-3a80b60e354b] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.006203729s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-428071 replace --force -f testdata/nginx-gvisor.yaml
E1218 12:08:59.417882  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [b62655c1-ded6-437a-aea8-3e455c0694c5] Pending
helpers_test.go:344: "nginx-gvisor" [b62655c1-ded6-437a-aea8-3e455c0694c5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [b62655c1-ded6-437a-aea8-3e455c0694c5] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 15.004972009s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-428071
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-428071: (1m31.839152855s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-428071 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E1218 12:10:51.711060  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 12:11:21.904490  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:11:21.909794  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:11:21.920123  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:11:21.940509  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:11:21.980833  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:11:22.061197  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:11:22.221714  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:11:22.542046  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:11:23.183023  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:11:24.464617  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:11:27.025431  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:11:32.146257  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-428071 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m23.651966688s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [2ae075f1-3020-4806-902c-3a80b60e354b] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.006585004s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [b62655c1-ded6-437a-aea8-3e455c0694c5] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.006122924s
helpers_test.go:175: Cleaning up "gvisor-428071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-428071
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-428071: (1.136837327s)
--- PASS: TestGvisorAddon (343.91s)
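Result lines like the one above follow `go test`'s standard `--- PASS|FAIL: Name (seconds)` format, which makes run-time triage easy to script. A minimal sketch for pulling names and durations out of such a report (the regex and sample lines are illustrative, copied from results in this run):

```python
import re

# Matches go test result lines, e.g. "--- PASS: TestGvisorAddon (343.91s)"
RESULT_RE = re.compile(r"^--- (PASS|FAIL|SKIP): (\S+) \((\d+(?:\.\d+)?)s\)$")

def parse_results(lines):
    """Yield (status, test name, duration in seconds) for each result line."""
    for line in lines:
        m = RESULT_RE.match(line.strip())
        if m:
            yield m.group(1), m.group(2), float(m.group(3))

sample = [
    "--- PASS: TestGvisorAddon (343.91s)",
    "--- PASS: TestImageBuild/serial/Setup (54.08s)",
    "--- FAIL: TestMultiNode/serial/DeleteNode (3.26s)",
]

results = list(parse_results(sample))
slowest = max(results, key=lambda r: r[2])
print(slowest)  # ('PASS', 'TestGvisorAddon', 343.91)
```

Subtests are indented in `-v` output, so the `strip()` before matching is what lets nested results like `TestImageBuild/serial/Setup` be picked up too.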

TestImageBuild/serial/Setup (54.08s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-878771 --driver=kvm2 
E1218 11:40:32.507285  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-878771 --driver=kvm2 : (54.083821169s)
--- PASS: TestImageBuild/serial/Setup (54.08s)

TestImageBuild/serial/NormalBuild (2.55s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-878771
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-878771: (2.548914096s)
--- PASS: TestImageBuild/serial/NormalBuild (2.55s)

TestImageBuild/serial/BuildWithBuildArg (1.45s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-878771
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-878771: (1.445270476s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.45s)

TestImageBuild/serial/BuildWithDockerIgnore (0.43s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-878771
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.43s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.33s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-878771
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.33s)

TestIngressAddonLegacy/StartLegacyK8sCluster (94.54s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-119818 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-119818 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m34.538204787s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (94.54s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.49s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-119818 addons enable ingress --alsologtostderr -v=5
E1218 11:42:48.660105  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-119818 addons enable ingress --alsologtostderr -v=5: (18.490409305s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.49s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.58s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-119818 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.58s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (49.05s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-119818 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-119818 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.445333982s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-119818 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-119818 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1a30f925-1d69-45d3-8263-ca67ce409ed9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1218 11:43:16.347993  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
helpers_test.go:344: "nginx" [1a30f925-1d69-45d3-8263-ca67ce409ed9] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.005684652s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-119818 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-119818 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-119818 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.100
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-119818 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-119818 addons disable ingress-dns --alsologtostderr -v=1: (12.93234933s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-119818 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-119818 addons disable ingress --alsologtostderr -v=1: (7.472064435s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (49.05s)

TestJSONOutput/start/Command (69.4s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-350684 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E1218 11:43:59.417960  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 11:43:59.423275  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 11:43:59.433541  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 11:43:59.453818  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 11:43:59.494137  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 11:43:59.574507  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 11:43:59.734927  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 11:44:00.055570  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 11:44:00.696536  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 11:44:01.977045  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 11:44:04.538914  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 11:44:09.659215  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 11:44:19.900065  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 11:44:40.380494  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-350684 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m9.400772627s)
--- PASS: TestJSONOutput/start/Command (69.40s)
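The interleaved `cert_rotation.go:168` errors above are client-go's certificate watcher retrying `client.crt` files for profiles that have already been deleted (e.g. `functional-622176`). When the spam gets heavy, it helps to tally which paths the watcher is stuck on; a small sketch, with log lines abridged from this report:

```python
import re
from collections import Counter

# Extracts the missing client.crt path from a cert_rotation error line.
CERT_RE = re.compile(r"open (\S+/client\.crt): no such file or directory")

log_lines = [
    "E1218 11:43:59.417960  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory",
    "E1218 11:44:00.055570  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory",
    "E1218 11:52:48.660709  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory",
]

missing = Counter(m.group(1) for line in log_lines if (m := CERT_RE.search(line)))
for path, count in missing.most_common():
    print(count, path)
```

Each distinct path maps back to one deleted profile, which is usually enough to confirm the errors are harmless leftovers rather than a failure in the test under way.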

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.61s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-350684 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-350684 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.12s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-350684 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-350684 --output=json --user=testUser: (8.115215086s)
--- PASS: TestJSONOutput/stop/Command (8.12s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-350258 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-350258 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (82.022196ms)
-- stdout --
	{"specversion":"1.0","id":"a8db3504-daa6-407d-bd18-7370af405ad3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-350258] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b61a090-9273-4633-9ece-354d33cfb39a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17824"}}
	{"specversion":"1.0","id":"26e6e96f-9125-4068-8d72-b85576d7858f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d91ea921-b882-43d2-8d84-a5f3d8f31660","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17824-683489/kubeconfig"}}
	{"specversion":"1.0","id":"93cc5f89-8fdd-43d3-b389-90bf7b2446ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17824-683489/.minikube"}}
	{"specversion":"1.0","id":"f6e48d22-0211-4448-b2a9-afc7f7ab550b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fce36aac-0a0f-4658-8c12-60554f3b735e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b518d4c5-1089-4f7b-bb87-a3ebba7c22fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
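Each stdout line above is a CloudEvents-style JSON object (`io.k8s.sigs.minikube.step`, `.info`, `.error`). A minimal shell sketch for pulling the exit code out of a saved event stream; `events.json` is a hypothetical file holding lines like those above:

```shell
# Filter the error-typed event and extract its exitcode field.
# events.json is assumed to contain the JSON lines from the stdout block.
grep '"type":"io.k8s.sigs.minikube.error"' events.json \
  | grep -o '"exitcode":"[0-9]*"'
```

For anything beyond a quick check, a real JSON parser (e.g. jq) is preferable to grep, since field order in the events is not guaranteed.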
helpers_test.go:175: Cleaning up "json-output-error-350258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-350258
--- PASS: TestErrorJSONOutput (0.23s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (105.44s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-156118 --driver=kvm2 
E1218 11:45:21.341718  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-156118 --driver=kvm2 : (49.98687329s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-158785 --driver=kvm2 
E1218 11:46:43.262888  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-158785 --driver=kvm2 : (52.496597205s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-156118
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-158785
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-158785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-158785
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-158785: (1.024701755s)
helpers_test.go:175: Cleaning up "first-156118" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-156118
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-156118: (1.000159697s)
--- PASS: TestMinikubeProfile (105.44s)

TestMountStart/serial/StartWithMountFirst (29.91s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-403081 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-403081 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.906320084s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.91s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-403081 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-403081 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (31.2s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-422302 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1218 11:47:48.659880  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-422302 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.20020848s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.20s)

TestMountStart/serial/VerifyMountSecond (0.43s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-422302 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-422302 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.43s)

TestMountStart/serial/DeleteFirst (0.88s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-403081 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.88s)

TestMountStart/serial/VerifyMountPostDelete (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-422302 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-422302 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

TestMountStart/serial/Stop (2.1s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-422302
E1218 11:47:56.316949  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 11:47:56.322423  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 11:47:56.332694  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 11:47:56.352990  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 11:47:56.393280  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 11:47:56.473605  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 11:47:56.634076  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-422302: (2.096899134s)
--- PASS: TestMountStart/serial/Stop (2.10s)

TestMountStart/serial/RestartStopped (26.83s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-422302
E1218 11:47:56.954208  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 11:47:57.595084  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 11:47:58.876149  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 11:48:01.437988  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 11:48:06.558678  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 11:48:16.798950  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-422302: (25.832622559s)
--- PASS: TestMountStart/serial/RestartStopped (26.83s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-422302 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-422302 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

TestMultiNode/serial/FreshStart2Nodes (130.41s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-107476 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E1218 11:48:37.279206  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 11:48:59.417946  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 11:49:18.240036  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 11:49:27.103585  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-107476 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m9.982241235s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (130.41s)

TestMultiNode/serial/DeployApp2Nodes (7.03s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107476 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107476 -- rollout status deployment/busybox
E1218 11:50:40.160300  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-107476 -- rollout status deployment/busybox: (5.116067887s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107476 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107476 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107476 -- exec busybox-5bc68d56bd-8dg4d -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107476 -- exec busybox-5bc68d56bd-sjq8b -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107476 -- exec busybox-5bc68d56bd-8dg4d -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107476 -- exec busybox-5bc68d56bd-sjq8b -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107476 -- exec busybox-5bc68d56bd-8dg4d -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107476 -- exec busybox-5bc68d56bd-sjq8b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.03s)

TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107476 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107476 -- exec busybox-5bc68d56bd-8dg4d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107476 -- exec busybox-5bc68d56bd-8dg4d -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107476 -- exec busybox-5bc68d56bd-sjq8b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-107476 -- exec busybox-5bc68d56bd-sjq8b -- sh -c "ping -c 1 192.168.39.1"
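The `nslookup ... | awk 'NR==5' | cut -d' ' -f3` pipeline used above takes the fifth line of BusyBox nslookup output (the `Address` line for the queried name) and cuts out its third space-separated field, the host IP. A self-contained sketch on sample output (addresses are illustrative):

```shell
# Simulated BusyBox nslookup output; line 5 carries the resolved address.
nslookup_out='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'
# NR==5 selects the fifth line; field 3 of that line is the IP itself.
printf '%s\n' "$nslookup_out" | awk 'NR==5' | cut -d' ' -f3
# → 192.168.39.1
```

Note this is brittle by design: it depends on BusyBox's fixed output layout, which is why the test pins the exact line number rather than matching a pattern.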
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

TestMultiNode/serial/AddNode (47.7s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-107476 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-107476 -v 3 --alsologtostderr: (47.100961569s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.70s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-107476 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (8s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 cp testdata/cp-test.txt multinode-107476:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 cp multinode-107476:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2135286047/001/cp-test_multinode-107476.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 cp multinode-107476:/home/docker/cp-test.txt multinode-107476-m02:/home/docker/cp-test_multinode-107476_multinode-107476-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476-m02 "sudo cat /home/docker/cp-test_multinode-107476_multinode-107476-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 cp multinode-107476:/home/docker/cp-test.txt multinode-107476-m03:/home/docker/cp-test_multinode-107476_multinode-107476-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476-m03 "sudo cat /home/docker/cp-test_multinode-107476_multinode-107476-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 cp testdata/cp-test.txt multinode-107476-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 cp multinode-107476-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2135286047/001/cp-test_multinode-107476-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 cp multinode-107476-m02:/home/docker/cp-test.txt multinode-107476:/home/docker/cp-test_multinode-107476-m02_multinode-107476.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476 "sudo cat /home/docker/cp-test_multinode-107476-m02_multinode-107476.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 cp multinode-107476-m02:/home/docker/cp-test.txt multinode-107476-m03:/home/docker/cp-test_multinode-107476-m02_multinode-107476-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476-m03 "sudo cat /home/docker/cp-test_multinode-107476-m02_multinode-107476-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 cp testdata/cp-test.txt multinode-107476-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 cp multinode-107476-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2135286047/001/cp-test_multinode-107476-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 cp multinode-107476-m03:/home/docker/cp-test.txt multinode-107476:/home/docker/cp-test_multinode-107476-m03_multinode-107476.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476 "sudo cat /home/docker/cp-test_multinode-107476-m03_multinode-107476.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 cp multinode-107476-m03:/home/docker/cp-test.txt multinode-107476-m02:/home/docker/cp-test_multinode-107476-m03_multinode-107476-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 ssh -n multinode-107476-m02 "sudo cat /home/docker/cp-test_multinode-107476-m03_multinode-107476-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.00s)

TestMultiNode/serial/StopNode (4.01s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-107476 node stop m03: (3.099322662s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-107476 status: exit status 7 (459.780177ms)
-- stdout --
	multinode-107476
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-107476-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-107476-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
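The plain-text `status` output above is line-oriented and easy to check mechanically. A small sketch counting stopped hosts in a saved copy of that output; `status.txt` is a hypothetical file holding the stdout block:

```shell
# Each node block reports "host: Running" or "host: Stopped";
# counting the Stopped lines gives the number of down nodes.
grep -c 'host: Stopped' status.txt
```

This matches the test's situation: one of three nodes stopped, hence `minikube status` returning exit status 7 rather than 0.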
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-107476 status --alsologtostderr: exit status 7 (451.520194ms)
-- stdout --
	multinode-107476
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-107476-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-107476-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1218 11:51:42.790891  705930 out.go:296] Setting OutFile to fd 1 ...
	I1218 11:51:42.790996  705930 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:51:42.791004  705930 out.go:309] Setting ErrFile to fd 2...
	I1218 11:51:42.791009  705930 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:51:42.791179  705930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
	I1218 11:51:42.791348  705930 out.go:303] Setting JSON to false
	I1218 11:51:42.791388  705930 mustload.go:65] Loading cluster: multinode-107476
	I1218 11:51:42.791425  705930 notify.go:220] Checking for updates...
	I1218 11:51:42.791870  705930 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:51:42.791891  705930 status.go:255] checking status of multinode-107476 ...
	I1218 11:51:42.792445  705930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:51:42.792539  705930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:51:42.812912  705930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35979
	I1218 11:51:42.813435  705930 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:51:42.814061  705930 main.go:141] libmachine: Using API Version  1
	I1218 11:51:42.814096  705930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:51:42.814481  705930 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:51:42.814662  705930 main.go:141] libmachine: (multinode-107476) Calling .GetState
	I1218 11:51:42.816333  705930 status.go:330] multinode-107476 host status = "Running" (err=<nil>)
	I1218 11:51:42.816355  705930 host.go:66] Checking if "multinode-107476" exists ...
	I1218 11:51:42.816627  705930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:51:42.816659  705930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:51:42.831224  705930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I1218 11:51:42.831609  705930 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:51:42.832129  705930 main.go:141] libmachine: Using API Version  1
	I1218 11:51:42.832158  705930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:51:42.832540  705930 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:51:42.832716  705930 main.go:141] libmachine: (multinode-107476) Calling .GetIP
	I1218 11:51:42.835581  705930 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:51:42.836072  705930 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:48:40 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:51:42.836126  705930 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:51:42.836265  705930 host.go:66] Checking if "multinode-107476" exists ...
	I1218 11:51:42.836559  705930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:51:42.836608  705930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:51:42.851806  705930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40803
	I1218 11:51:42.852212  705930 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:51:42.852587  705930 main.go:141] libmachine: Using API Version  1
	I1218 11:51:42.852601  705930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:51:42.852929  705930 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:51:42.853091  705930 main.go:141] libmachine: (multinode-107476) Calling .DriverName
	I1218 11:51:42.853301  705930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 11:51:42.853324  705930 main.go:141] libmachine: (multinode-107476) Calling .GetSSHHostname
	I1218 11:51:42.856160  705930 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:51:42.856635  705930 main.go:141] libmachine: (multinode-107476) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:59:cb", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:48:40 +0000 UTC Type:0 Mac:52:54:00:4e:59:cb Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:multinode-107476 Clientid:01:52:54:00:4e:59:cb}
	I1218 11:51:42.856662  705930 main.go:141] libmachine: (multinode-107476) DBG | domain multinode-107476 has defined IP address 192.168.39.124 and MAC address 52:54:00:4e:59:cb in network mk-multinode-107476
	I1218 11:51:42.856880  705930 main.go:141] libmachine: (multinode-107476) Calling .GetSSHPort
	I1218 11:51:42.857077  705930 main.go:141] libmachine: (multinode-107476) Calling .GetSSHKeyPath
	I1218 11:51:42.857243  705930 main.go:141] libmachine: (multinode-107476) Calling .GetSSHUsername
	I1218 11:51:42.857413  705930 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476/id_rsa Username:docker}
	I1218 11:51:42.947182  705930 ssh_runner.go:195] Run: systemctl --version
	I1218 11:51:42.953013  705930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 11:51:42.968312  705930 kubeconfig.go:92] found "multinode-107476" server: "https://192.168.39.124:8443"
	I1218 11:51:42.968347  705930 api_server.go:166] Checking apiserver status ...
	I1218 11:51:42.968382  705930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1218 11:51:42.980808  705930 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1890/cgroup
	I1218 11:51:42.989830  705930 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/podd249aa06177557dc7c27cc4c9fd3f8c4/9226aa8cd1e990647164e2a20291f22e7512cebcdd08566af6cced9c9cb2d1b9"
	I1218 11:51:42.989889  705930 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podd249aa06177557dc7c27cc4c9fd3f8c4/9226aa8cd1e990647164e2a20291f22e7512cebcdd08566af6cced9c9cb2d1b9/freezer.state
	I1218 11:51:42.999189  705930 api_server.go:204] freezer state: "THAWED"
	I1218 11:51:42.999218  705930 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I1218 11:51:43.004927  705930 api_server.go:279] https://192.168.39.124:8443/healthz returned 200:
	ok
	I1218 11:51:43.004956  705930 status.go:421] multinode-107476 apiserver status = Running (err=<nil>)
	I1218 11:51:43.004969  705930 status.go:257] multinode-107476 status: &{Name:multinode-107476 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 11:51:43.004996  705930 status.go:255] checking status of multinode-107476-m02 ...
	I1218 11:51:43.005405  705930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:51:43.005445  705930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:51:43.020287  705930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I1218 11:51:43.020706  705930 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:51:43.021201  705930 main.go:141] libmachine: Using API Version  1
	I1218 11:51:43.021224  705930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:51:43.021539  705930 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:51:43.021785  705930 main.go:141] libmachine: (multinode-107476-m02) Calling .GetState
	I1218 11:51:43.023509  705930 status.go:330] multinode-107476-m02 host status = "Running" (err=<nil>)
	I1218 11:51:43.023526  705930 host.go:66] Checking if "multinode-107476-m02" exists ...
	I1218 11:51:43.023929  705930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:51:43.023979  705930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:51:43.038525  705930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37315
	I1218 11:51:43.038964  705930 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:51:43.039415  705930 main.go:141] libmachine: Using API Version  1
	I1218 11:51:43.039439  705930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:51:43.039888  705930 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:51:43.040070  705930 main.go:141] libmachine: (multinode-107476-m02) Calling .GetIP
	I1218 11:51:43.042886  705930 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:51:43.043300  705930 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:51:43.043322  705930 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:51:43.043494  705930 host.go:66] Checking if "multinode-107476-m02" exists ...
	I1218 11:51:43.043838  705930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:51:43.043878  705930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:51:43.058195  705930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35933
	I1218 11:51:43.058640  705930 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:51:43.059142  705930 main.go:141] libmachine: Using API Version  1
	I1218 11:51:43.059175  705930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:51:43.059516  705930 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:51:43.059719  705930 main.go:141] libmachine: (multinode-107476-m02) Calling .DriverName
	I1218 11:51:43.059953  705930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1218 11:51:43.059982  705930 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHHostname
	I1218 11:51:43.063071  705930 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:51:43.063545  705930 main.go:141] libmachine: (multinode-107476-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:62:9b", ip: ""} in network mk-multinode-107476: {Iface:virbr1 ExpiryTime:2023-12-18 12:49:59 +0000 UTC Type:0 Mac:52:54:00:66:62:9b Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:multinode-107476-m02 Clientid:01:52:54:00:66:62:9b}
	I1218 11:51:43.063569  705930 main.go:141] libmachine: (multinode-107476-m02) DBG | domain multinode-107476-m02 has defined IP address 192.168.39.238 and MAC address 52:54:00:66:62:9b in network mk-multinode-107476
	I1218 11:51:43.063805  705930 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHPort
	I1218 11:51:43.063994  705930 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHKeyPath
	I1218 11:51:43.064125  705930 main.go:141] libmachine: (multinode-107476-m02) Calling .GetSSHUsername
	I1218 11:51:43.064283  705930 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17824-683489/.minikube/machines/multinode-107476-m02/id_rsa Username:docker}
	I1218 11:51:43.147586  705930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1218 11:51:43.160510  705930 status.go:257] multinode-107476-m02 status: &{Name:multinode-107476-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1218 11:51:43.160569  705930 status.go:255] checking status of multinode-107476-m03 ...
	I1218 11:51:43.160925  705930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:51:43.160983  705930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:51:43.176588  705930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42119
	I1218 11:51:43.177057  705930 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:51:43.177544  705930 main.go:141] libmachine: Using API Version  1
	I1218 11:51:43.177568  705930 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:51:43.177965  705930 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:51:43.178225  705930 main.go:141] libmachine: (multinode-107476-m03) Calling .GetState
	I1218 11:51:43.179840  705930 status.go:330] multinode-107476-m03 host status = "Stopped" (err=<nil>)
	I1218 11:51:43.179857  705930 status.go:343] host is not running, skipping remaining checks
	I1218 11:51:43.179864  705930 status.go:257] multinode-107476-m03 status: &{Name:multinode-107476-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (4.01s)
TestMultiNode/serial/StartAfterStop (32.42s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-107476 node start m03 --alsologtostderr: (31.749654292s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.42s)
TestMultiNode/serial/StopMultiNode (111.78s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-107476 stop: (1m51.575841279s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-107476 status: exit status 7 (104.139185ms)
-- stdout --
	multinode-107476
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-107476-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-107476 status --alsologtostderr: exit status 7 (100.05734ms)
-- stdout --
	multinode-107476
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-107476-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1218 11:56:07.779485  707453 out.go:296] Setting OutFile to fd 1 ...
	I1218 11:56:07.779657  707453 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:56:07.779668  707453 out.go:309] Setting ErrFile to fd 2...
	I1218 11:56:07.779674  707453 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1218 11:56:07.779873  707453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17824-683489/.minikube/bin
	I1218 11:56:07.780075  707453 out.go:303] Setting JSON to false
	I1218 11:56:07.780123  707453 mustload.go:65] Loading cluster: multinode-107476
	I1218 11:56:07.780181  707453 notify.go:220] Checking for updates...
	I1218 11:56:07.780566  707453 config.go:182] Loaded profile config "multinode-107476": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1218 11:56:07.780583  707453 status.go:255] checking status of multinode-107476 ...
	I1218 11:56:07.781015  707453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:56:07.781093  707453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:56:07.797338  707453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37787
	I1218 11:56:07.797808  707453 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:56:07.798386  707453 main.go:141] libmachine: Using API Version  1
	I1218 11:56:07.798411  707453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:56:07.798883  707453 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:56:07.799083  707453 main.go:141] libmachine: (multinode-107476) Calling .GetState
	I1218 11:56:07.800767  707453 status.go:330] multinode-107476 host status = "Stopped" (err=<nil>)
	I1218 11:56:07.800781  707453 status.go:343] host is not running, skipping remaining checks
	I1218 11:56:07.800786  707453 status.go:257] multinode-107476 status: &{Name:multinode-107476 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1218 11:56:07.800807  707453 status.go:255] checking status of multinode-107476-m02 ...
	I1218 11:56:07.801105  707453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1218 11:56:07.801140  707453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1218 11:56:07.815849  707453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37453
	I1218 11:56:07.816279  707453 main.go:141] libmachine: () Calling .GetVersion
	I1218 11:56:07.816853  707453 main.go:141] libmachine: Using API Version  1
	I1218 11:56:07.816889  707453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1218 11:56:07.817220  707453 main.go:141] libmachine: () Calling .GetMachineName
	I1218 11:56:07.817386  707453 main.go:141] libmachine: (multinode-107476-m02) Calling .GetState
	I1218 11:56:07.818873  707453 status.go:330] multinode-107476-m02 host status = "Stopped" (err=<nil>)
	I1218 11:56:07.818893  707453 status.go:343] host is not running, skipping remaining checks
	I1218 11:56:07.818902  707453 status.go:257] multinode-107476-m02 status: &{Name:multinode-107476-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (111.78s)
TestMultiNode/serial/RestartMultiNode (106.22s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-107476 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E1218 11:57:48.659848  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-107476 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m45.659334411s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-107476 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (106.22s)
TestMultiNode/serial/ValidateNameConflict (54.26s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-107476
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-107476-m02 --driver=kvm2 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-107476-m02 --driver=kvm2 : exit status 14 (83.601756ms)
-- stdout --
	* [multinode-107476-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17824-683489/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17824-683489/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-107476-m02' is duplicated with machine name 'multinode-107476-m02' in profile 'multinode-107476'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-107476-m03 --driver=kvm2 
E1218 11:57:56.316887  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-107476-m03 --driver=kvm2 : (53.038305095s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-107476
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-107476: exit status 80 (269.987828ms)
-- stdout --
	* Adding node m03 to cluster multinode-107476
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-107476-m03 already exists in multinode-107476-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-107476-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (54.26s)
TestPreload (190.4s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-741947 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E1218 11:58:59.418117  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 12:00:22.463904  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-741947 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m32.777592631s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-741947 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-741947 image pull gcr.io/k8s-minikube/busybox: (2.096060674s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-741947
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-741947: (13.120170658s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-741947 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-741947 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m21.116999955s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-741947 image list
helpers_test.go:175: Cleaning up "test-preload-741947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-741947
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-741947: (1.071078433s)
--- PASS: TestPreload (190.40s)
TestScheduledStopUnix (123.95s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-537412 --memory=2048 --driver=kvm2 
E1218 12:02:48.660533  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-537412 --memory=2048 --driver=kvm2 : (52.090134041s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-537412 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-537412 -n scheduled-stop-537412
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-537412 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-537412 --cancel-scheduled
E1218 12:02:56.316443  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-537412 -n scheduled-stop-537412
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-537412
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-537412 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1218 12:03:59.417703  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-537412
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-537412: exit status 7 (87.108216ms)
-- stdout --
	scheduled-stop-537412
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-537412 -n scheduled-stop-537412
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-537412 -n scheduled-stop-537412: exit status 7 (83.09883ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-537412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-537412
--- PASS: TestScheduledStopUnix (123.95s)
TestSkaffold (148.01s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2812508460 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-010862 --memory=2600 --driver=kvm2 
E1218 12:04:19.361765  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-010862 --memory=2600 --driver=kvm2 : (51.733393307s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2812508460 run --minikube-profile skaffold-010862 --kube-context skaffold-010862 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2812508460 run --minikube-profile skaffold-010862 --kube-context skaffold-010862 --status-check=true --port-forward=false --interactive=false: (1m21.131224676s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-75c5d4bbc8-79pnj" [24685431-6548-4349-874f-d4bc1f9c8054] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004219627s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-98f7dd779-xx6gt" [4f308b39-e67d-4f59-99e3-51b62af34b06] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004268671s
helpers_test.go:175: Cleaning up "skaffold-010862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-010862
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-010862: (1.255920311s)
--- PASS: TestSkaffold (148.01s)
TestRunningBinaryUpgrade (242.13s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.4043603277.exe start -p running-upgrade-334175 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.4043603277.exe start -p running-upgrade-334175 --memory=2200 --vm-driver=kvm2 : (1m57.148070838s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-334175 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-334175 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (2m1.501034157s)
helpers_test.go:175: Cleaning up "running-upgrade-334175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-334175
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-334175: (1.533875686s)
--- PASS: TestRunningBinaryUpgrade (242.13s)
TestKubernetesUpgrade (189.54s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-888550 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-888550 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m11.436853423s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-888550
E1218 12:11:42.386624  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-888550: (13.132069116s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-888550 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-888550 status --format={{.Host}}: exit status 7 (86.724053ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-888550 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 
E1218 12:12:02.866913  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-888550 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 : (48.817229765s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-888550 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-888550 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-888550 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (109.790717ms)
-- stdout --
	* [kubernetes-upgrade-888550] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17824-683489/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17824-683489/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-888550
	    minikube start -p kubernetes-upgrade-888550 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8885502 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-888550 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-888550 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 
E1218 12:12:43.827248  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:12:48.660341  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-888550 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 : (54.716328791s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-888550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-888550
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-888550: (1.16838472s)
--- PASS: TestKubernetesUpgrade (189.54s)

                                                
                                    
TestPause/serial/Start (125.51s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-327451 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-327451 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (2m5.510282713s)
--- PASS: TestPause/serial/Start (125.51s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (45.64s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-327451 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-327451 --alsologtostderr -v=1 --driver=kvm2 : (45.61314418s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (45.64s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-946403 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-946403 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (87.528124ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-946403] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17824
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17824-683489/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17824-683489/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (64.29s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-946403 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-946403 --driver=kvm2 : (1m3.829841662s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-946403 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (64.29s)

                                                
                                    
TestPause/serial/Pause (0.65s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-327451 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

                                                
                                    
TestPause/serial/VerifyStatus (0.85s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-327451 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-327451 --output=json --layout=cluster: exit status 2 (852.289548ms)

                                                
                                                
-- stdout --
	{"Name":"pause-327451","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-327451","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.85s)
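The `--layout=cluster` status JSON captured above reports paused components with the HTTP-style code 418. As an illustrative sketch (not minikube code), the exit-status-2 result the test accepts can be reproduced by checking that code in the captured payload; the `is_paused` helper below is hypothetical:

```python
import json

# Trimmed copy of the `minikube status --output=json --layout=cluster` payload
# captured in the log above; field names are taken verbatim from that output.
status = json.loads("""
{"Name":"pause-327451","StatusCode":418,"StatusName":"Paused",
 "Nodes":[{"Name":"pause-327451","StatusCode":200,"StatusName":"OK",
   "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
                 "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
""")

def is_paused(cluster: dict) -> bool:
    # In the cluster layout, 418 marks a paused cluster/component
    # (405 marks a stopped one, 200 a healthy one).
    return cluster["StatusCode"] == 418

print(is_paused(status))  # True: the cluster above is paused
```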

                                                
                                    
TestPause/serial/Unpause (0.58s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-327451 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.58s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.84s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.84s)

                                                
                                    
TestPause/serial/PauseAgain (0.92s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-327451 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.92s)

                                                
                                    
TestPause/serial/DeletePaused (1.23s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-327451 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-327451 --alsologtostderr -v=5: (1.230310181s)
--- PASS: TestPause/serial/DeletePaused (1.23s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (247.98s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.391461111.exe start -p stopped-upgrade-448558 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.391461111.exe start -p stopped-upgrade-448558 --memory=2200 --vm-driver=kvm2 : (2m13.112619826s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.391461111.exe -p stopped-upgrade-448558 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.391461111.exe -p stopped-upgrade-448558 stop: (13.445069696s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-448558 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-448558 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m41.419989174s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (247.98s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (4.2s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.198337679s)
--- PASS: TestPause/serial/VerifyDeletedResources (4.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (101.27s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m41.271765484s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (135.43s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E1218 12:13:53.201482  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
E1218 12:13:53.206809  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
E1218 12:13:53.217142  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
E1218 12:13:53.237437  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
E1218 12:13:53.277716  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
E1218 12:13:53.358200  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
E1218 12:13:53.519337  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
E1218 12:13:53.840328  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
E1218 12:13:54.480773  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
E1218 12:13:55.761427  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
E1218 12:13:58.322425  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
E1218 12:13:59.418603  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 12:14:03.442861  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
E1218 12:14:05.747976  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:14:13.683528  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (2m15.429822748s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (135.43s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (34.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-946403 --no-kubernetes --driver=kvm2 
E1218 12:14:34.163777  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-946403 --no-kubernetes --driver=kvm2 : (32.682944851s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-946403 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-946403 status -o json: exit status 2 (317.698573ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-946403","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-946403
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-946403: (1.314286653s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (34.32s)
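The default `status -o json` schema shown above differs from the `--layout=cluster` one: it is a flat object whose component states are strings. As a sketch only (the check below is illustrative, not minikube's), the non-zero exit the test tolerates corresponds to the host running while Kubernetes components are stopped:

```python
import json

# Verbatim copy of the `minikube status -o json` payload captured in the log.
status = json.loads(
    '{"Name":"NoKubernetes-946403","Host":"Running","Kubelet":"Stopped",'
    '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
)

# The VM itself is up, but kubelet and the API server are not, which is
# exactly what a --no-kubernetes profile should report.
host_up = status["Host"] == "Running"
k8s_up = status["Kubelet"] == "Running" and status["APIServer"] == "Running"
print(host_up, k8s_up)  # host is running, Kubernetes is not
```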

                                                
                                    
TestNoKubernetes/serial/Start (36.2s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-946403 --no-kubernetes --driver=kvm2 
E1218 12:15:15.124360  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-946403 --no-kubernetes --driver=kvm2 : (36.201306372s)
--- PASS: TestNoKubernetes/serial/Start (36.20s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-568060 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (15.49s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-568060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vkj6f" [51675033-2289-4916-a55e-0e309289051e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vkj6f" [51675033-2289-4916-a55e-0e309289051e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.005340356s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.49s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-568060 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.29s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-946403 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-946403 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.478664ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.31s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.31s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.48s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-946403
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-946403: (2.478450867s)
--- PASS: TestNoKubernetes/serial/Stop (2.48s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (25.61s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-946403 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-946403 --driver=kvm2 : (25.607733644s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (25.61s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (119.13s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m59.130883799s)
--- PASS: TestNetworkPlugins/group/calico/Start (119.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-n2wgg" [22c086ff-0987-4ea0-879d-ec0c612ae49a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007779677s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-568060 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-568060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z9gtn" [83b97f10-8751-4625-a050-011899b1c92b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z9gtn" [83b97f10-8751-4625-a050-011899b1c92b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.007381749s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-946403 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-946403 "sudo systemctl is-active --quiet service kubelet": exit status 1 (249.448619ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-568060 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/Start (144.47s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E1218 12:16:21.904706  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (2m24.470083287s)
--- PASS: TestNetworkPlugins/group/false/Start (144.47s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (161.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E1218 12:16:37.044904  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
E1218 12:16:49.588883  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:17:02.464824  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m41.28730401s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (161.29s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.46s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-448558
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-448558: (1.457735076s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.46s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (120.4s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
E1218 12:17:48.660609  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (2m0.398129164s)
--- PASS: TestNetworkPlugins/group/flannel/Start (120.40s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.11s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gsjgj" [7f64aaf7-5df9-49da-b82e-14aa3db1bdf8] Running
E1218 12:17:56.316633  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.105070972s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.11s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-568060 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.36s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-568060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lc8c9" [b1c5e390-577a-4ed9-8c53-684b37a143f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lc8c9" [b1c5e390-577a-4ed9-8c53-684b37a143f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005583518s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.36s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-568060 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (95.51s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m35.512625229s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.51s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-568060 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-568060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mfbn5" [76342a40-ff55-47a1-8e7e-21491cff0237] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mfbn5" [76342a40-ff55-47a1-8e7e-21491cff0237] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.00552044s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-568060 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (91.41s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m31.408117526s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (91.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-568060 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-568060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kpf8t" [aa70ce90-bcb2-4271-bcb4-56940fc19aac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kpf8t" [aa70ce90-bcb2-4271-bcb4-56940fc19aac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.006601445s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-568060 exec deployment/netcat -- nslookup kubernetes.default
E1218 12:19:20.886062  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (99.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-568060 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m39.481601008s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (99.48s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-d6sm6" [005999ee-275a-4372-92a7-8d144ec727df] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006081701s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-568060 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (16.3s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-568060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x5jr2" [c46bbf5c-646b-4941-ab53-4ddd68346a05] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x5jr2" [c46bbf5c-646b-4941-ab53-4ddd68346a05] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 16.013868809s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (16.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-568060 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-568060 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-568060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jg8mk" [92daafff-78fd-4fa6-8961-8ed7331a3890] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jg8mk" [92daafff-78fd-4fa6-8961-8ed7331a3890] Running
E1218 12:20:18.198114  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/auto-568060/client.crt: no such file or directory
E1218 12:20:18.203428  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/auto-568060/client.crt: no such file or directory
E1218 12:20:18.213723  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/auto-568060/client.crt: no such file or directory
E1218 12:20:18.234117  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/auto-568060/client.crt: no such file or directory
E1218 12:20:18.274411  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/auto-568060/client.crt: no such file or directory
E1218 12:20:18.354965  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/auto-568060/client.crt: no such file or directory
E1218 12:20:18.515117  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/auto-568060/client.crt: no such file or directory
E1218 12:20:18.835731  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/auto-568060/client.crt: no such file or directory
E1218 12:20:19.475912  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/auto-568060/client.crt: no such file or directory
E1218 12:20:20.756844  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/auto-568060/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.006315569s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-568060 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (143.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-571040 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E1218 12:20:28.437415  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/auto-568060/client.crt: no such file or directory
E1218 12:20:38.678693  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/auto-568060/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-571040 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m23.031915059s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (143.03s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-568060 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (13.44s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-568060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pc7vl" [d9f7c494-872d-4437-9900-0beba5e1322a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pc7vl" [d9f7c494-872d-4437-9900-0beba5e1322a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.005072666s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (122.53s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-222669 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-222669 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (2m2.529781763s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (122.53s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-568060 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1218 12:20:54.150626  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kindnet-568060/client.crt: no such file or directory
E1218 12:20:54.155939  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kindnet-568060/client.crt: no such file or directory
E1218 12:20:54.166509  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kindnet-568060/client.crt: no such file or directory
E1218 12:20:54.186833  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kindnet-568060/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (93.34s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-832809 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-832809 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4: (1m33.335946835s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (93.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-568060 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-568060 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fg59m" [9092d04c-aadf-4291-8c92-dca210a4b1f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1218 12:21:21.904689  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-fg59m" [9092d04c-aadf-4291-8c92-dca210a4b1f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005016679s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-568060 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-568060 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-615906 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4
E1218 12:22:16.075281  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kindnet-568060/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-615906 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4: (1m21.623171064s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.62s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.4s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-222669 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [08fdd50d-0fd5-4130-8f73-5cf97ff75d68] Pending
helpers_test.go:344: "busybox" [08fdd50d-0fd5-4130-8f73-5cf97ff75d68] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [08fdd50d-0fd5-4130-8f73-5cf97ff75d68] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005335116s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-222669 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.40s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-571040 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [20bdc750-1be1-4f9e-a1f3-c131de291447] Pending
helpers_test.go:344: "busybox" [20bdc750-1be1-4f9e-a1f3-c131de291447] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1218 12:22:48.660492  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
helpers_test.go:344: "busybox" [20bdc750-1be1-4f9e-a1f3-c131de291447] Running
E1218 12:22:51.844317  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
E1218 12:22:51.849634  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
E1218 12:22:51.859975  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
E1218 12:22:51.880290  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
E1218 12:22:51.920685  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
E1218 12:22:52.001042  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
E1218 12:22:52.161464  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
E1218 12:22:52.482278  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
E1218 12:22:53.122940  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.008134358s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-571040 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.45s)

TestStartStop/group/embed-certs/serial/DeployApp (10.38s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-832809 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [854f9b46-eca9-459e-9ef5-94d7c73fcfb7] Pending
helpers_test.go:344: "busybox" [854f9b46-eca9-459e-9ef5-94d7c73fcfb7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [854f9b46-eca9-459e-9ef5-94d7c73fcfb7] Running
E1218 12:22:54.403465  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005521079s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-832809 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-222669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-222669 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/no-preload/serial/Stop (13.14s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-222669 --alsologtostderr -v=3
E1218 12:22:56.316242  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-222669 --alsologtostderr -v=3: (13.143639914s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.14s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-571040 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1218 12:22:56.964418  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-571040 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/old-k8s-version/serial/Stop (13.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-571040 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-571040 --alsologtostderr -v=3: (13.149075313s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.15s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-832809 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-832809 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.139620998s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-832809 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/embed-certs/serial/Stop (13.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-832809 --alsologtostderr -v=3
E1218 12:23:02.041817  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/auto-568060/client.crt: no such file or directory
E1218 12:23:02.085119  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-832809 --alsologtostderr -v=3: (13.130969247s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.13s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-222669 -n no-preload-222669
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-222669 -n no-preload-222669: exit status 7 (86.144808ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-222669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (337.96s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-222669 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-222669 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (5m37.597666426s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-222669 -n no-preload-222669
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (337.96s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-571040 -n old-k8s-version-571040
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-571040 -n old-k8s-version-571040: exit status 7 (84.893877ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-571040 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/old-k8s-version/serial/SecondStart (466.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-571040 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E1218 12:23:12.325357  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-571040 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m46.444643765s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-571040 -n old-k8s-version-571040
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (466.72s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832809 -n embed-certs-832809
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832809 -n embed-certs-832809: exit status 7 (101.317727ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-832809 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/embed-certs/serial/SecondStart (355.45s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-832809 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-832809 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4: (5m55.161239512s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832809 -n embed-certs-832809
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (355.45s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-615906 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [75757c06-3422-4493-866b-cb6a49dae32b] Pending
helpers_test.go:344: "busybox" [75757c06-3422-4493-866b-cb6a49dae32b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [75757c06-3422-4493-866b-cb6a49dae32b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005009857s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-615906 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.33s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-615906 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-615906 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-615906 --alsologtostderr -v=3
E1218 12:23:32.806323  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
E1218 12:23:36.675958  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:23:36.681263  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:23:36.691592  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:23:36.711778  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:23:36.752125  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:23:36.832506  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:23:36.993410  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:23:37.313829  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:23:37.954675  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:23:37.995954  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kindnet-568060/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-615906 --alsologtostderr -v=3: (13.141031923s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-615906 -n default-k8s-diff-port-615906
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-615906 -n default-k8s-diff-port-615906: exit status 7 (97.707447ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-615906 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (353.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-615906 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4
E1218 12:23:39.235765  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:23:41.796627  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:23:46.917611  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:23:53.201117  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
E1218 12:23:57.158419  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:23:59.418178  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
E1218 12:24:09.813552  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:24:09.818858  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:24:09.829147  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:24:09.849443  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:24:09.889928  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:24:09.970348  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:24:10.130880  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:24:10.451257  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:24:11.092379  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:24:12.373512  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:24:13.767484  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
E1218 12:24:14.933975  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:24:17.639099  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:24:20.054483  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:24:30.295033  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:24:42.240391  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:24:42.245780  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:24:42.256871  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:24:42.277554  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:24:42.318737  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:24:42.399793  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:24:42.560353  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:24:42.881084  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:24:43.522043  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:24:44.803145  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:24:47.364308  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:24:50.775207  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:24:52.485217  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:24:58.600311  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:25:02.725544  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:25:08.565150  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:25:08.570486  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:25:08.580798  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:25:08.601127  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:25:08.641459  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:25:08.721958  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:25:08.882391  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:25:09.203002  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:25:09.843451  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:25:11.123878  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:25:13.684854  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:25:18.197807  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/auto-568060/client.crt: no such file or directory
E1218 12:25:18.805926  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:25:23.206095  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:25:29.046207  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:25:31.735916  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:25:35.687778  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
E1218 12:25:40.634887  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:25:40.640164  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:25:40.650433  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:25:40.670709  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:25:40.711031  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:25:40.791441  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:25:40.952476  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:25:41.272825  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:25:41.913935  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:25:43.194237  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:25:45.754914  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:25:45.882117  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/auto-568060/client.crt: no such file or directory
E1218 12:25:49.527050  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:25:50.875224  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:25:54.149787  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kindnet-568060/client.crt: no such file or directory
E1218 12:26:01.115871  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:26:04.166814  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:26:19.884789  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:26:19.890110  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:26:19.900404  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:26:19.920757  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:26:19.961144  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:26:20.041518  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:26:20.201968  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:26:20.521200  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
E1218 12:26:20.522218  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:26:21.162875  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:26:21.596739  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:26:21.837181  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kindnet-568060/client.crt: no such file or directory
E1218 12:26:21.904217  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:26:22.443881  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:26:25.004145  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:26:30.125372  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:26:30.487675  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:26:40.366611  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:26:53.656866  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
E1218 12:27:00.847790  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:27:02.557359  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:27:26.087824  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
E1218 12:27:31.712131  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 12:27:41.809002  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:27:44.950133  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/skaffold-010862/client.crt: no such file or directory
E1218 12:27:48.660065  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/addons-694092/client.crt: no such file or directory
E1218 12:27:51.843864  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
E1218 12:27:52.408059  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:27:56.316480  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/ingress-addon-legacy-119818/client.crt: no such file or directory
E1218 12:28:19.528464  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/calico-568060/client.crt: no such file or directory
E1218 12:28:24.478393  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
E1218 12:28:36.676305  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-615906 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4: (5m52.884983546s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-615906 -n default-k8s-diff-port-615906
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (353.21s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (20.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-q6tdb" [829f4cb6-7ca0-4b01-8d79-82c5056eb26c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1218 12:28:53.201574  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/gvisor-428071/client.crt: no such file or directory
E1218 12:28:59.417765  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/functional-622176/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-q6tdb" [829f4cb6-7ca0-4b01-8d79-82c5056eb26c] Running
E1218 12:29:03.729318  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/custom-flannel-568060/client.crt: no such file or directory
E1218 12:29:04.362102  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/false-568060/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 20.006444243s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (20.01s)
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-q6tdb" [829f4cb6-7ca0-4b01-8d79-82c5056eb26c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005377984s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-222669 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-22w65" [55f28450-7352-47ae-9bef-3d14eb9ac502] Running
E1218 12:29:09.812906  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004897755s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-222669 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.84s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-222669 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-222669 -n no-preload-222669
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-222669 -n no-preload-222669: exit status 2 (284.205407ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-222669 -n no-preload-222669
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-222669 -n no-preload-222669: exit status 2 (305.790641ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-222669 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-222669 -n no-preload-222669
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-222669 -n no-preload-222669
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.84s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-22w65" [55f28450-7352-47ae-9bef-3d14eb9ac502] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006866493s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-832809 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (72.59s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-035623 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-035623 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (1m12.593891746s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (72.59s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-832809 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-832809 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-832809 -n embed-certs-832809
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-832809 -n embed-certs-832809: exit status 2 (294.152835ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-832809 -n embed-certs-832809
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-832809 -n embed-certs-832809: exit status 2 (284.994289ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-832809 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-832809 -n embed-certs-832809
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-832809 -n embed-certs-832809
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7sl64" [11f4f250-a17e-4d75-9c6f-6c12b0b4a037] Running
E1218 12:29:37.497308  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/enable-default-cni-568060/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004713137s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7sl64" [11f4f250-a17e-4d75-9c6f-6c12b0b4a037] Running
E1218 12:29:42.241210  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/flannel-568060/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005646066s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-615906 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-615906 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.90s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-615906 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-615906 -n default-k8s-diff-port-615906
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-615906 -n default-k8s-diff-port-615906: exit status 2 (281.79156ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-615906 -n default-k8s-diff-port-615906
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-615906 -n default-k8s-diff-port-615906: exit status 2 (275.515435ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-615906 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-615906 -n default-k8s-diff-port-615906
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-615906 -n default-k8s-diff-port-615906
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.90s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-035623 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (13.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-035623 --alsologtostderr -v=3
E1218 12:30:36.248379  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/bridge-568060/client.crt: no such file or directory
E1218 12:30:40.634899  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-035623 --alsologtostderr -v=3: (13.145347083s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-035623 -n newest-cni-035623
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-035623 -n newest-cni-035623: exit status 7 (94.735949ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-035623 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (51.86s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-035623 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
E1218 12:30:54.150658  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kindnet-568060/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-035623 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (51.563762626s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-035623 -n newest-cni-035623
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (51.86s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-cwpvz" [67685769-cf49-4cfc-9a5b-0b0167f6a522] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004390024s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-cwpvz" [67685769-cf49-4cfc-9a5b-0b0167f6a522] Running
E1218 12:31:08.319045  690739 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17824-683489/.minikube/profiles/kubenet-568060/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003514669s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-571040 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-571040 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-571040 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-571040 -n old-k8s-version-571040
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-571040 -n old-k8s-version-571040: exit status 2 (258.682282ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-571040 -n old-k8s-version-571040
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-571040 -n old-k8s-version-571040: exit status 2 (257.938859ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-571040 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-571040 -n old-k8s-version-571040
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-571040 -n old-k8s-version-571040
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.56s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-035623 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (2.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-035623 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-035623 -n newest-cni-035623
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-035623 -n newest-cni-035623: exit status 2 (271.533239ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-035623 -n newest-cni-035623
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-035623 -n newest-cni-035623: exit status 2 (259.189131ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-035623 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-035623 -n newest-cni-035623
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-035623 -n newest-cni-035623
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.44s)

Test skip (34/328)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
109 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
166 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
199 TestKicCustomNetwork 0
200 TestKicExistingNetwork 0
201 TestKicCustomSubnet 0
202 TestKicStaticIP 0
234 TestChangeNoneUser 0
237 TestScheduledStopWindows 0
241 TestInsufficientStorage 0
245 TestMissingContainerUpgrade 0
256 TestNetworkPlugins/group/cilium 4.03
264 TestStartStop/group/disable-driver-mounts 0.21

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (4.03s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-568060 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-568060

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-568060

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-568060

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-568060

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-568060

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-568060

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-568060

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-568060

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-568060

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-568060

>>> host: /etc/nsswitch.conf:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

>>> host: /etc/hosts:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

>>> host: /etc/resolv.conf:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-568060

>>> host: crictl pods:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

>>> host: crictl containers:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

>>> k8s: describe netcat deployment:
error: context "cilium-568060" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-568060" does not exist

>>> k8s: netcat logs:
error: context "cilium-568060" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-568060" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-568060" does not exist

>>> k8s: coredns logs:
error: context "cilium-568060" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-568060" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-568060" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-568060

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-568060

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-568060" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-568060" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-568060

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-568060

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-568060" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-568060" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-568060" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-568060" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-568060" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-568060

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-568060" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-568060"

                                                
                                                
----------------------- debugLogs end: cilium-568060 [took: 3.851910308s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-568060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-568060
--- SKIP: TestNetworkPlugins/group/cilium (4.03s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-596295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-596295
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)
