Test Report: KVM_Linux 17581

8f89b804228acd053c87abbbfb2e31f99595775c:2023-11-14:31875

Failed tests (5/321)

Order  Failed test                                                         Duration (s)
219    TestMultiNode/serial/RestartKeepsNodes                              52.08
220    TestMultiNode/serial/DeleteNode                                     0.71
221    TestMultiNode/serial/StopMultiNode                                  116.59
222    TestMultiNode/serial/RestartMultiNode                               158.19
388    TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages   1.92
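To triage a failure like this locally, the failing group can be re-run against a fresh build. A minimal sketch, assuming a minikube source checkout with a working kvm2/libvirt setup; the make targets and the TEST_ARGS variable follow the project's contributor testing docs, so treat the exact names as assumptions and adjust to your tree. Serial subtests depend on earlier steps in their group (RestartKeepsNodes assumes the cluster built by the preceding steps), so re-running the parent TestMultiNode is usually more reliable than filtering for the single subtest:

	# build minikube and the kvm2 driver plugin from the checkout
	make out/minikube-linux-amd64 out/docker-machine-driver-kvm2
	# run only the TestMultiNode group with the kvm2 driver
	env TEST_ARGS="-minikube-start-args=--driver=kvm2 -test.run TestMultiNode" make integration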
TestMultiNode/serial/RestartKeepsNodes (52.08s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-661456
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-661456
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-661456: (28.503699235s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-661456 --wait=true -v=8 --alsologtostderr
E1114 13:57:28.729814   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-661456 --wait=true -v=8 --alsologtostderr: exit status 90 (23.211394278s)
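(Exit status 90 falls in minikube's guest-error range, ExGuestError in pkg/minikube/reason, assuming the current exit-code scheme; it generally indicates that the VM itself came back but a step inside the guest failed, so the stderr trace below is the place to look.)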

-- stdout --
	* [multinode-661456] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-6041/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-6041/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node multinode-661456 in cluster multinode-661456
	* Restarting existing kvm2 VM for "multinode-661456" ...
	
	

-- /stdout --
** stderr ** 
	I1114 13:57:11.824861   28571 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:57:11.825017   28571 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:57:11.825029   28571 out.go:309] Setting ErrFile to fd 2...
	I1114 13:57:11.825037   28571 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:57:11.825239   28571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
	I1114 13:57:11.825819   28571 out.go:303] Setting JSON to false
	I1114 13:57:11.826678   28571 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2382,"bootTime":1699967850,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 13:57:11.826736   28571 start.go:138] virtualization: kvm guest
	I1114 13:57:11.829083   28571 out.go:177] * [multinode-661456] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 13:57:11.830589   28571 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 13:57:11.831894   28571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:57:11.830615   28571 notify.go:220] Checking for updates...
	I1114 13:57:11.834296   28571 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 13:57:11.835697   28571 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-6041/.minikube
	I1114 13:57:11.836868   28571 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 13:57:11.838089   28571 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 13:57:11.839736   28571 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 13:57:11.839850   28571 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:57:11.840286   28571 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:57:11.840358   28571 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:57:11.855156   28571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37179
	I1114 13:57:11.855659   28571 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:57:11.856292   28571 main.go:141] libmachine: Using API Version  1
	I1114 13:57:11.856312   28571 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:57:11.856638   28571 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:57:11.856827   28571 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:57:11.893668   28571 out.go:177] * Using the kvm2 driver based on existing profile
	I1114 13:57:11.894889   28571 start.go:298] selected driver: kvm2
	I1114 13:57:11.894905   28571 start.go:902] validating driver "kvm2" against &{Name:multinode-661456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-661456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:57:11.895038   28571 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 13:57:11.895357   28571 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 13:57:11.895429   28571 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17581-6041/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 13:57:11.910280   28571 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 13:57:11.911251   28571 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 13:57:11.911332   28571 cni.go:84] Creating CNI manager for ""
	I1114 13:57:11.911346   28571 cni.go:136] 3 nodes found, recommending kindnet
	I1114 13:57:11.911355   28571 start_flags.go:323] config:
	{Name:multinode-661456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-661456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:57:11.911697   28571 iso.go:125] acquiring lock: {Name:mk133084c23ed177adc820fc7d96b1f642fbaa07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 13:57:11.914267   28571 out.go:177] * Starting control plane node multinode-661456 in cluster multinode-661456
	I1114 13:57:11.915660   28571 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1114 13:57:11.915706   28571 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17581-6041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1114 13:57:11.915718   28571 cache.go:56] Caching tarball of preloaded images
	I1114 13:57:11.915802   28571 preload.go:174] Found /home/jenkins/minikube-integration/17581-6041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1114 13:57:11.915812   28571 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1114 13:57:11.915931   28571 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/config.json ...
	I1114 13:57:11.916125   28571 start.go:365] acquiring machines lock for multinode-661456: {Name:mka8a7be0fef2cfa89eb7b4f7f1c7ded4441f603 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 13:57:11.916166   28571 start.go:369] acquired machines lock for "multinode-661456" in 22.084µs
	I1114 13:57:11.916178   28571 start.go:96] Skipping create...Using existing machine configuration
	I1114 13:57:11.916183   28571 fix.go:54] fixHost starting: 
	I1114 13:57:11.916423   28571 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:57:11.916452   28571 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:57:11.930702   28571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37517
	I1114 13:57:11.931104   28571 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:57:11.931560   28571 main.go:141] libmachine: Using API Version  1
	I1114 13:57:11.931581   28571 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:57:11.931899   28571 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:57:11.932116   28571 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:57:11.932320   28571 main.go:141] libmachine: (multinode-661456) Calling .GetState
	I1114 13:57:11.933966   28571 fix.go:102] recreateIfNeeded on multinode-661456: state=Stopped err=<nil>
	I1114 13:57:11.934014   28571 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	W1114 13:57:11.934210   28571 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 13:57:11.935976   28571 out.go:177] * Restarting existing kvm2 VM for "multinode-661456" ...
	I1114 13:57:11.937162   28571 main.go:141] libmachine: (multinode-661456) Calling .Start
	I1114 13:57:11.937380   28571 main.go:141] libmachine: (multinode-661456) Ensuring networks are active...
	I1114 13:57:11.938279   28571 main.go:141] libmachine: (multinode-661456) Ensuring network default is active
	I1114 13:57:11.938604   28571 main.go:141] libmachine: (multinode-661456) Ensuring network mk-multinode-661456 is active
	I1114 13:57:11.938974   28571 main.go:141] libmachine: (multinode-661456) Getting domain xml...
	I1114 13:57:11.939827   28571 main.go:141] libmachine: (multinode-661456) Creating domain...
	I1114 13:57:13.154989   28571 main.go:141] libmachine: (multinode-661456) Waiting to get IP...
	I1114 13:57:13.155871   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:13.156228   28571 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:57:13.156299   28571 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:57:13.156206   28601 retry.go:31] will retry after 261.928802ms: waiting for machine to come up
	I1114 13:57:13.419522   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:13.420075   28571 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:57:13.420105   28571 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:57:13.420021   28601 retry.go:31] will retry after 315.715988ms: waiting for machine to come up
	I1114 13:57:13.737458   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:13.737757   28571 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:57:13.737794   28571 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:57:13.737719   28601 retry.go:31] will retry after 399.993432ms: waiting for machine to come up
	I1114 13:57:14.139181   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:14.139585   28571 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:57:14.139615   28571 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:57:14.139528   28601 retry.go:31] will retry after 542.43724ms: waiting for machine to come up
	I1114 13:57:14.683215   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:14.683678   28571 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:57:14.683698   28571 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:57:14.683647   28601 retry.go:31] will retry after 590.611775ms: waiting for machine to come up
	I1114 13:57:15.275386   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:15.275879   28571 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:57:15.275910   28571 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:57:15.275830   28601 retry.go:31] will retry after 654.692582ms: waiting for machine to come up
	I1114 13:57:15.931636   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:15.932086   28571 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:57:15.932116   28571 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:57:15.932043   28601 retry.go:31] will retry after 1.052102644s: waiting for machine to come up
	I1114 13:57:16.985491   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:16.985895   28571 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:57:16.985931   28571 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:57:16.985839   28601 retry.go:31] will retry after 1.302276607s: waiting for machine to come up
	I1114 13:57:18.290213   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:18.290663   28571 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:57:18.290692   28571 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:57:18.290589   28601 retry.go:31] will retry after 1.42533191s: waiting for machine to come up
	I1114 13:57:19.718027   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:19.718335   28571 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:57:19.718355   28571 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:57:19.718294   28601 retry.go:31] will retry after 1.630670185s: waiting for machine to come up
	I1114 13:57:21.351089   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:21.351425   28571 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:57:21.351456   28571 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:57:21.351373   28601 retry.go:31] will retry after 2.482473063s: waiting for machine to come up
	I1114 13:57:23.836544   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:23.836953   28571 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:57:23.837036   28571 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:57:23.836931   28601 retry.go:31] will retry after 2.188031267s: waiting for machine to come up
	I1114 13:57:26.028374   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:26.028877   28571 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:57:26.028909   28571 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:57:26.028837   28601 retry.go:31] will retry after 3.818676439s: waiting for machine to come up
	I1114 13:57:29.850987   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:29.851391   28571 main.go:141] libmachine: (multinode-661456) Found IP for machine: 192.168.39.222
	I1114 13:57:29.851414   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has current primary IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:29.851435   28571 main.go:141] libmachine: (multinode-661456) Reserving static IP address...
	I1114 13:57:29.851908   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "multinode-661456", mac: "52:54:00:f9:71:4b", ip: "192.168.39.222"} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:29.851948   28571 main.go:141] libmachine: (multinode-661456) DBG | skip adding static IP to network mk-multinode-661456 - found existing host DHCP lease matching {name: "multinode-661456", mac: "52:54:00:f9:71:4b", ip: "192.168.39.222"}
	I1114 13:57:29.851967   28571 main.go:141] libmachine: (multinode-661456) Reserved static IP address: 192.168.39.222
	I1114 13:57:29.851982   28571 main.go:141] libmachine: (multinode-661456) Waiting for SSH to be available...
	I1114 13:57:29.851999   28571 main.go:141] libmachine: (multinode-661456) DBG | Getting to WaitForSSH function...
	I1114 13:57:29.854154   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:29.854536   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:29.854567   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:29.854638   28571 main.go:141] libmachine: (multinode-661456) DBG | Using SSH client type: external
	I1114 13:57:29.854667   28571 main.go:141] libmachine: (multinode-661456) DBG | Using SSH private key: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa (-rw-------)
	I1114 13:57:29.854708   28571 main.go:141] libmachine: (multinode-661456) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 13:57:29.854723   28571 main.go:141] libmachine: (multinode-661456) DBG | About to run SSH command:
	I1114 13:57:29.854746   28571 main.go:141] libmachine: (multinode-661456) DBG | exit 0
	I1114 13:57:29.949236   28571 main.go:141] libmachine: (multinode-661456) DBG | SSH cmd err, output: <nil>: 
	I1114 13:57:29.949702   28571 main.go:141] libmachine: (multinode-661456) Calling .GetConfigRaw
	I1114 13:57:29.950309   28571 main.go:141] libmachine: (multinode-661456) Calling .GetIP
	I1114 13:57:29.952677   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:29.953028   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:29.953061   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:29.953299   28571 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/config.json ...
	I1114 13:57:29.953570   28571 machine.go:88] provisioning docker machine ...
	I1114 13:57:29.953590   28571 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:57:29.953792   28571 main.go:141] libmachine: (multinode-661456) Calling .GetMachineName
	I1114 13:57:29.953980   28571 buildroot.go:166] provisioning hostname "multinode-661456"
	I1114 13:57:29.954002   28571 main.go:141] libmachine: (multinode-661456) Calling .GetMachineName
	I1114 13:57:29.954137   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:57:29.956039   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:29.956325   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:29.956366   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:29.956468   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:57:29.956637   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:29.956793   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:29.956969   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:57:29.957175   28571 main.go:141] libmachine: Using SSH client type: native
	I1114 13:57:29.957557   28571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1114 13:57:29.957574   28571 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-661456 && echo "multinode-661456" | sudo tee /etc/hostname
	I1114 13:57:30.097795   28571 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-661456
	
	I1114 13:57:30.097828   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:57:30.100512   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:30.100837   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:30.100894   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:30.100987   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:57:30.101200   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:30.101365   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:30.101521   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:57:30.101699   28571 main.go:141] libmachine: Using SSH client type: native
	I1114 13:57:30.102082   28571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1114 13:57:30.102106   28571 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-661456' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-661456/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-661456' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 13:57:30.237110   28571 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 13:57:30.237144   28571 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17581-6041/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-6041/.minikube}
	I1114 13:57:30.237166   28571 buildroot.go:174] setting up certificates
	I1114 13:57:30.237179   28571 provision.go:83] configureAuth start
	I1114 13:57:30.237193   28571 main.go:141] libmachine: (multinode-661456) Calling .GetMachineName
	I1114 13:57:30.237507   28571 main.go:141] libmachine: (multinode-661456) Calling .GetIP
	I1114 13:57:30.240116   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:30.240513   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:30.240543   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:30.240637   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:57:30.242755   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:30.243063   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:30.243094   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:30.243193   28571 provision.go:138] copyHostCerts
	I1114 13:57:30.243220   28571 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem
	I1114 13:57:30.243250   28571 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem, removing ...
	I1114 13:57:30.243265   28571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem
	I1114 13:57:30.243331   28571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem (1082 bytes)
	I1114 13:57:30.243401   28571 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem
	I1114 13:57:30.243423   28571 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem, removing ...
	I1114 13:57:30.243429   28571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem
	I1114 13:57:30.243451   28571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem (1123 bytes)
	I1114 13:57:30.243502   28571 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem
	I1114 13:57:30.243519   28571 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem, removing ...
	I1114 13:57:30.243525   28571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem
	I1114 13:57:30.243544   28571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem (1675 bytes)
	I1114 13:57:30.243589   28571 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca-key.pem org=jenkins.multinode-661456 san=[192.168.39.222 192.168.39.222 localhost 127.0.0.1 minikube multinode-661456]
	I1114 13:57:30.392547   28571 provision.go:172] copyRemoteCerts
	I1114 13:57:30.392601   28571 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 13:57:30.392621   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:57:30.395519   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:30.395828   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:30.395853   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:30.396048   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:57:30.396254   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:30.396414   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:57:30.396531   28571 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 13:57:30.491774   28571 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1114 13:57:30.491850   28571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 13:57:30.514631   28571 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1114 13:57:30.514710   28571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1114 13:57:30.537003   28571 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1114 13:57:30.537088   28571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 13:57:30.559683   28571 provision.go:86] duration metric: configureAuth took 322.490315ms
	I1114 13:57:30.559728   28571 buildroot.go:189] setting minikube options for container-runtime
	I1114 13:57:30.560049   28571 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 13:57:30.560076   28571 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:57:30.560355   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:57:30.563331   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:30.563743   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:30.563769   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:30.563985   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:57:30.564189   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:30.564358   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:30.564541   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:57:30.564721   28571 main.go:141] libmachine: Using SSH client type: native
	I1114 13:57:30.565092   28571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1114 13:57:30.565106   28571 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1114 13:57:30.695509   28571 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1114 13:57:30.695536   28571 buildroot.go:70] root file system type: tmpfs
	I1114 13:57:30.695672   28571 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1114 13:57:30.695702   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:57:30.698413   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:30.698801   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:30.698823   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:30.699016   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:57:30.699212   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:30.699379   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:30.699505   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:57:30.699660   28571 main.go:141] libmachine: Using SSH client type: native
	I1114 13:57:30.699986   28571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1114 13:57:30.700047   28571 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1114 13:57:30.842955   28571 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1114 13:57:30.842995   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:57:30.845725   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:30.846101   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:30.846122   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:30.846327   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:57:30.846524   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:30.846664   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:30.846786   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:57:30.846957   28571 main.go:141] libmachine: Using SSH client type: native
	I1114 13:57:30.847443   28571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1114 13:57:30.847473   28571 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1114 13:57:31.849842   28571 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1114 13:57:31.849875   28571 machine.go:91] provisioned docker machine in 1.896289194s
	I1114 13:57:31.849894   28571 start.go:300] post-start starting for "multinode-661456" (driver="kvm2")
	I1114 13:57:31.849906   28571 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 13:57:31.849935   28571 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:57:31.850257   28571 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 13:57:31.850288   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:57:31.852965   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:31.853277   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:31.853307   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:31.853407   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:57:31.853606   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:31.853749   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:57:31.853906   28571 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 13:57:31.947901   28571 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 13:57:31.952874   28571 command_runner.go:130] > NAME=Buildroot
	I1114 13:57:31.952899   28571 command_runner.go:130] > VERSION=2021.02.12-1-gccdd192-dirty
	I1114 13:57:31.952905   28571 command_runner.go:130] > ID=buildroot
	I1114 13:57:31.952916   28571 command_runner.go:130] > VERSION_ID=2021.02.12
	I1114 13:57:31.952923   28571 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1114 13:57:31.952962   28571 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 13:57:31.952982   28571 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-6041/.minikube/addons for local assets ...
	I1114 13:57:31.953056   28571 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-6041/.minikube/files for local assets ...
	I1114 13:57:31.953159   28571 filesync.go:149] local asset: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem -> 132382.pem in /etc/ssl/certs
	I1114 13:57:31.953173   28571 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem -> /etc/ssl/certs/132382.pem
	I1114 13:57:31.953253   28571 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 13:57:31.962795   28571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem --> /etc/ssl/certs/132382.pem (1708 bytes)
	I1114 13:57:31.988154   28571 start.go:303] post-start completed in 138.2437ms
	I1114 13:57:31.988183   28571 fix.go:56] fixHost completed within 20.071999351s
	I1114 13:57:31.988202   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:57:31.991013   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:31.991386   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:31.991422   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:31.991627   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:57:31.991836   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:31.991990   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:31.992098   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:57:31.992359   28571 main.go:141] libmachine: Using SSH client type: native
	I1114 13:57:31.992683   28571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1114 13:57:31.992694   28571 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1114 13:57:32.122094   28571 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699970252.073332540
	
	I1114 13:57:32.122125   28571 fix.go:206] guest clock: 1699970252.073332540
	I1114 13:57:32.122134   28571 fix.go:219] Guest: 2023-11-14 13:57:32.07333254 +0000 UTC Remote: 2023-11-14 13:57:31.988186528 +0000 UTC m=+20.213043728 (delta=85.146012ms)
	I1114 13:57:32.122159   28571 fix.go:190] guest clock delta is within tolerance: 85.146012ms
	I1114 13:57:32.122169   28571 start.go:83] releasing machines lock for "multinode-661456", held for 20.20598984s
	I1114 13:57:32.122194   28571 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:57:32.122445   28571 main.go:141] libmachine: (multinode-661456) Calling .GetIP
	I1114 13:57:32.125144   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:32.125525   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:32.125551   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:32.125674   28571 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:57:32.126211   28571 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:57:32.126371   28571 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:57:32.126453   28571 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 13:57:32.126497   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:57:32.126581   28571 ssh_runner.go:195] Run: cat /version.json
	I1114 13:57:32.126606   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:57:32.129153   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:32.129411   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:32.129481   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:32.129503   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:32.129674   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:57:32.129874   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:32.129885   28571 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:32.129903   28571 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:32.130046   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:57:32.130048   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:57:32.130206   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:32.130233   28571 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 13:57:32.130362   28571 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:57:32.130494   28571 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 13:57:32.222386   28571 command_runner.go:130] > {"iso_version": "v1.32.1-1699648094-17581", "kicbase_version": "v0.0.42-1699485386-17565", "minikube_version": "v1.32.0", "commit": "4770ca4ce6b6c59d35dfae229a3cdfca5570c673"}
	I1114 13:57:32.222824   28571 ssh_runner.go:195] Run: systemctl --version
	I1114 13:57:32.246099   28571 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1114 13:57:32.246178   28571 command_runner.go:130] > systemd 247 (247)
	I1114 13:57:32.246221   28571 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1114 13:57:32.246292   28571 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 13:57:32.251661   28571 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1114 13:57:32.251777   28571 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 13:57:32.251845   28571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 13:57:32.268711   28571 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1114 13:57:32.268765   28571 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 13:57:32.268779   28571 start.go:472] detecting cgroup driver to use...
	I1114 13:57:32.268887   28571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 13:57:32.286692   28571 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1114 13:57:32.287082   28571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1114 13:57:32.298180   28571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1114 13:57:32.308614   28571 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1114 13:57:32.308671   28571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1114 13:57:32.319100   28571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1114 13:57:32.329815   28571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1114 13:57:32.340343   28571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1114 13:57:32.351143   28571 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 13:57:32.362773   28571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1114 13:57:32.373736   28571 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 13:57:32.383248   28571 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1114 13:57:32.383330   28571 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 13:57:32.392957   28571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 13:57:32.505953   28571 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1114 13:57:32.523143   28571 start.go:472] detecting cgroup driver to use...
	I1114 13:57:32.523213   28571 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1114 13:57:32.539207   28571 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1114 13:57:32.539884   28571 command_runner.go:130] > [Unit]
	I1114 13:57:32.539907   28571 command_runner.go:130] > Description=Docker Application Container Engine
	I1114 13:57:32.539917   28571 command_runner.go:130] > Documentation=https://docs.docker.com
	I1114 13:57:32.539927   28571 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1114 13:57:32.539937   28571 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1114 13:57:32.539946   28571 command_runner.go:130] > StartLimitBurst=3
	I1114 13:57:32.539951   28571 command_runner.go:130] > StartLimitIntervalSec=60
	I1114 13:57:32.539955   28571 command_runner.go:130] > [Service]
	I1114 13:57:32.539959   28571 command_runner.go:130] > Type=notify
	I1114 13:57:32.539963   28571 command_runner.go:130] > Restart=on-failure
	I1114 13:57:32.539975   28571 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1114 13:57:32.539983   28571 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1114 13:57:32.540004   28571 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1114 13:57:32.540015   28571 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1114 13:57:32.540030   28571 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1114 13:57:32.540041   28571 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1114 13:57:32.540051   28571 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1114 13:57:32.540061   28571 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1114 13:57:32.540070   28571 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1114 13:57:32.540077   28571 command_runner.go:130] > ExecStart=
	I1114 13:57:32.540104   28571 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1114 13:57:32.540122   28571 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1114 13:57:32.540136   28571 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1114 13:57:32.540147   28571 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1114 13:57:32.540154   28571 command_runner.go:130] > LimitNOFILE=infinity
	I1114 13:57:32.540158   28571 command_runner.go:130] > LimitNPROC=infinity
	I1114 13:57:32.540163   28571 command_runner.go:130] > LimitCORE=infinity
	I1114 13:57:32.540173   28571 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1114 13:57:32.540187   28571 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1114 13:57:32.540197   28571 command_runner.go:130] > TasksMax=infinity
	I1114 13:57:32.540206   28571 command_runner.go:130] > TimeoutStartSec=0
	I1114 13:57:32.540221   28571 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1114 13:57:32.540230   28571 command_runner.go:130] > Delegate=yes
	I1114 13:57:32.540241   28571 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1114 13:57:32.540246   28571 command_runner.go:130] > KillMode=process
	I1114 13:57:32.540250   28571 command_runner.go:130] > [Install]
	I1114 13:57:32.540260   28571 command_runner.go:130] > WantedBy=multi-user.target
	I1114 13:57:32.541009   28571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 13:57:32.556118   28571 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 13:57:32.585313   28571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 13:57:32.597818   28571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1114 13:57:32.611253   28571 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1114 13:57:32.643000   28571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1114 13:57:32.655965   28571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 13:57:32.674145   28571 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1114 13:57:32.674224   28571 ssh_runner.go:195] Run: which cri-dockerd
	I1114 13:57:32.677923   28571 command_runner.go:130] > /usr/bin/cri-dockerd
	I1114 13:57:32.678061   28571 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1114 13:57:32.686988   28571 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1114 13:57:32.703837   28571 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1114 13:57:32.810888   28571 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1114 13:57:32.927049   28571 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1114 13:57:32.927155   28571 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1114 13:57:32.944084   28571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 13:57:33.047823   28571 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1114 13:57:34.512012   28571 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.464153983s)
	I1114 13:57:34.512070   28571 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1114 13:57:34.619920   28571 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1114 13:57:34.739199   28571 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1114 13:57:34.853825   28571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 13:57:34.955955   28571 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1114 13:57:34.972061   28571 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I1114 13:57:34.974249   28571 out.go:177] 
	W1114 13:57:34.975623   28571 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W1114 13:57:34.975644   28571 out.go:239] * 
	W1114 13:57:34.976637   28571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1114 13:57:34.977938   28571 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-linux-amd64 node list -p multinode-661456" : exit status 90
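The start aborts at RUNTIME_ENABLE because `sudo systemctl restart cri-docker.socket` exits non-zero inside the guest. A minimal triage sketch, assuming the profile VM from this run is still reachable over SSH (these are standard systemd/minikube invocations, not commands taken from this log):

    # Read the socket unit's journal, as the error message itself suggests:
    out/minikube-linux-amd64 ssh -p multinode-661456 -- sudo journalctl -xeu cri-docker.socket
    # Check both the socket and the service it activates:
    out/minikube-linux-amd64 ssh -p multinode-661456 -- sudo systemctl status cri-docker.socket cri-docker.service
    # Retry the restart once docker.service is confirmed healthy:
    out/minikube-linux-amd64 ssh -p multinode-661456 -- sudo systemctl restart cri-docker.socket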
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-661456
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-661456 -n multinode-661456
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-661456 -n multinode-661456: exit status 6 (238.355223ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1114 13:57:35.275341   28741 status.go:415] kubeconfig endpoint: extract IP: "multinode-661456" does not appear in /home/jenkins/minikube-integration/17581-6041/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-661456" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (52.08s)
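The exit status 6 in the post-mortem follows from the stale kubeconfig flagged in the warning: after the failed restart, the "multinode-661456" endpoint no longer appears in the kubeconfig, so the status check can only see the host. A sketch of the fix-up the warning itself recommends (assuming a subsequent successful start, since `update-context` can only rewrite an entry the profile can serve):

    # Rewrite the kubeconfig entry for this profile:
    out/minikube-linux-amd64 update-context -p multinode-661456
    # Verify kubectl now points at the refreshed context:
    kubectl config current-context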

TestMultiNode/serial/DeleteNode (0.71s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-661456 node delete m03: exit status 119 (202.718826ms)

-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-661456"

-- /stdout --
** stderr ** 
	! This is unusual - you may want to investigate using "minikube logs -p multinode-661456"

** /stderr **
multinode_test.go:396: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-661456 node delete m03": exit status 119
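`node delete` refuses to proceed while the control plane is stopped, as the stdout above states. A rough recovery sketch under that assumption, using the profile from this run:

    # Bring the control plane back before mutating the node set:
    out/minikube-linux-amd64 start -p multinode-661456
    # Then retry the deletion:
    out/minikube-linux-amd64 -p multinode-661456 node delete m03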
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr: exit status 7 (268.020826ms)

-- stdout --
	multinode-661456
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
	multinode-661456-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-661456-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1114 13:57:35.545619   28795 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:57:35.545882   28795 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:57:35.545891   28795 out.go:309] Setting ErrFile to fd 2...
	I1114 13:57:35.545898   28795 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:57:35.546107   28795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
	I1114 13:57:35.546294   28795 out.go:303] Setting JSON to false
	I1114 13:57:35.546329   28795 mustload.go:65] Loading cluster: multinode-661456
	I1114 13:57:35.546430   28795 notify.go:220] Checking for updates...
	I1114 13:57:35.546758   28795 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 13:57:35.546774   28795 status.go:255] checking status of multinode-661456 ...
	I1114 13:57:35.547190   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:57:35.547260   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:57:35.561494   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33561
	I1114 13:57:35.561893   28795 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:57:35.562391   28795 main.go:141] libmachine: Using API Version  1
	I1114 13:57:35.562406   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:57:35.562691   28795 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:57:35.562839   28795 main.go:141] libmachine: (multinode-661456) Calling .GetState
	I1114 13:57:35.564340   28795 status.go:330] multinode-661456 host status = "Running" (err=<nil>)
	I1114 13:57:35.564353   28795 host.go:66] Checking if "multinode-661456" exists ...
	I1114 13:57:35.564647   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:57:35.564696   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:57:35.578863   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40903
	I1114 13:57:35.579220   28795 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:57:35.579608   28795 main.go:141] libmachine: Using API Version  1
	I1114 13:57:35.579628   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:57:35.579893   28795 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:57:35.580074   28795 main.go:141] libmachine: (multinode-661456) Calling .GetIP
	I1114 13:57:35.582440   28795 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:35.582802   28795 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:35.582833   28795 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:35.582932   28795 host.go:66] Checking if "multinode-661456" exists ...
	I1114 13:57:35.583209   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:57:35.583243   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:57:35.597091   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34607
	I1114 13:57:35.597504   28795 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:57:35.597929   28795 main.go:141] libmachine: Using API Version  1
	I1114 13:57:35.597952   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:57:35.598206   28795 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:57:35.598369   28795 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:57:35.598532   28795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 13:57:35.598565   28795 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:57:35.601404   28795 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:35.601839   28795 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:57:35.601863   28795 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:57:35.601974   28795 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:57:35.602168   28795 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:57:35.602335   28795 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:57:35.602480   28795 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 13:57:35.692400   28795 ssh_runner.go:195] Run: systemctl --version
	I1114 13:57:35.697874   28795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	E1114 13:57:35.710939   28795 status.go:415] kubeconfig endpoint: extract IP: "multinode-661456" does not appear in /home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 13:57:35.710970   28795 api_server.go:166] Checking apiserver status ...
	I1114 13:57:35.711001   28795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 13:57:35.722879   28795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 13:57:35.722898   28795 status.go:421] multinode-661456 apiserver status = Stopped (err=<nil>)
	I1114 13:57:35.722907   28795 status.go:257] multinode-661456 status: &{Name:multinode-661456 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1114 13:57:35.722921   28795 status.go:255] checking status of multinode-661456-m02 ...
	I1114 13:57:35.723214   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:57:35.723249   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:57:35.737700   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41317
	I1114 13:57:35.738191   28795 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:57:35.738652   28795 main.go:141] libmachine: Using API Version  1
	I1114 13:57:35.738676   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:57:35.739037   28795 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:57:35.739223   28795 main.go:141] libmachine: (multinode-661456-m02) Calling .GetState
	I1114 13:57:35.740851   28795 status.go:330] multinode-661456-m02 host status = "Stopped" (err=<nil>)
	I1114 13:57:35.740868   28795 status.go:343] host is not running, skipping remaining checks
	I1114 13:57:35.740875   28795 status.go:257] multinode-661456-m02 status: &{Name:multinode-661456-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1114 13:57:35.740893   28795 status.go:255] checking status of multinode-661456-m03 ...
	I1114 13:57:35.741168   28795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:57:35.741203   28795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:57:35.755609   28795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43431
	I1114 13:57:35.756037   28795 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:57:35.756504   28795 main.go:141] libmachine: Using API Version  1
	I1114 13:57:35.756525   28795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:57:35.756841   28795 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:57:35.757018   28795 main.go:141] libmachine: (multinode-661456-m03) Calling .GetState
	I1114 13:57:35.758411   28795 status.go:330] multinode-661456-m03 host status = "Stopped" (err=<nil>)
	I1114 13:57:35.758426   28795 status.go:343] host is not running, skipping remaining checks
	I1114 13:57:35.758431   28795 status.go:257] multinode-661456-m03 status: &{Name:multinode-661456-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-661456 -n multinode-661456
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-661456 -n multinode-661456: exit status 6 (243.274286ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1114 13:57:35.990534   28836 status.go:415] kubeconfig endpoint: extract IP: "multinode-661456" does not appear in /home/jenkins/minikube-integration/17581-6041/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-661456" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/DeleteNode (0.71s)
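For reading the status exit codes in these post-mortems: per `minikube status --help`, the exit status encodes component health bitwise from right to left, 1 for the host, 2 for the cluster (kubelet), 4 for Kubernetes (apiserver). The exit status 6 above therefore means the host is up but kubelet and apiserver are not; the exit status 7 seen later in this report means all three are down. For example:

    # Exit code is a bitmask: 1 = host NOK, 2 = cluster NOK, 4 = kubernetes NOK
    out/minikube-linux-amd64 status -p multinode-661456 -n multinode-661456; echo "status exit: $?"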

TestMultiNode/serial/StopMultiNode (116.59s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 stop
E1114 13:57:50.836684   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 13:57:56.416932   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 13:58:58.751878   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 13:59:13.885211   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-661456 stop: (1m56.295492475s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-661456 status: exit status 7 (110.246495ms)

-- stdout --
	multinode-661456
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-661456-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-661456-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr: exit status 7 (113.011119ms)

-- stdout --
	multinode-661456
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-661456-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-661456-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1114 13:59:32.463480   29222 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:59:32.463628   29222 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:59:32.463638   29222 out.go:309] Setting ErrFile to fd 2...
	I1114 13:59:32.463645   29222 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:59:32.463878   29222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
	I1114 13:59:32.464062   29222 out.go:303] Setting JSON to false
	I1114 13:59:32.464099   29222 mustload.go:65] Loading cluster: multinode-661456
	I1114 13:59:32.464197   29222 notify.go:220] Checking for updates...
	I1114 13:59:32.464519   29222 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 13:59:32.464535   29222 status.go:255] checking status of multinode-661456 ...
	I1114 13:59:32.464924   29222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:59:32.465011   29222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:59:32.482728   29222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1114 13:59:32.483214   29222 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:59:32.483819   29222 main.go:141] libmachine: Using API Version  1
	I1114 13:59:32.483844   29222 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:59:32.484236   29222 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:59:32.484410   29222 main.go:141] libmachine: (multinode-661456) Calling .GetState
	I1114 13:59:32.486104   29222 status.go:330] multinode-661456 host status = "Stopped" (err=<nil>)
	I1114 13:59:32.486118   29222 status.go:343] host is not running, skipping remaining checks
	I1114 13:59:32.486125   29222 status.go:257] multinode-661456 status: &{Name:multinode-661456 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1114 13:59:32.486174   29222 status.go:255] checking status of multinode-661456-m02 ...
	I1114 13:59:32.486456   29222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:59:32.486501   29222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:59:32.500259   29222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38597
	I1114 13:59:32.500622   29222 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:59:32.501061   29222 main.go:141] libmachine: Using API Version  1
	I1114 13:59:32.501085   29222 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:59:32.501375   29222 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:59:32.501547   29222 main.go:141] libmachine: (multinode-661456-m02) Calling .GetState
	I1114 13:59:32.502970   29222 status.go:330] multinode-661456-m02 host status = "Stopped" (err=<nil>)
	I1114 13:59:32.502982   29222 status.go:343] host is not running, skipping remaining checks
	I1114 13:59:32.502995   29222 status.go:257] multinode-661456-m02 status: &{Name:multinode-661456-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1114 13:59:32.503033   29222 status.go:255] checking status of multinode-661456-m03 ...
	I1114 13:59:32.503343   29222 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:59:32.503400   29222 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:59:32.518416   29222 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45617
	I1114 13:59:32.518799   29222 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:59:32.519296   29222 main.go:141] libmachine: Using API Version  1
	I1114 13:59:32.519323   29222 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:59:32.519626   29222 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:59:32.519790   29222 main.go:141] libmachine: (multinode-661456-m03) Calling .GetState
	I1114 13:59:32.521246   29222 status.go:330] multinode-661456-m03 host status = "Stopped" (err=<nil>)
	I1114 13:59:32.521278   29222 status.go:343] host is not running, skipping remaining checks
	I1114 13:59:32.521285   29222 status.go:257] multinode-661456-m03 status: &{Name:multinode-661456-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr": multinode-661456
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-661456-m02
type: Worker
host: Stopped
kubelet: Stopped

multinode-661456-m03
type: Worker
host: Stopped
kubelet: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr": multinode-661456
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode-661456-m02
type: Worker
host: Stopped
kubelet: Stopped

multinode-661456-m03
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-661456 -n multinode-661456
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-661456 -n multinode-661456: exit status 7 (75.197452ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-661456" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (116.59s)
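This failure looks like a knock-on effect of the earlier DeleteNode failure: m03 was never removed, so status reports three stopped hosts and kubelets where the serial suite now expects two. An illustrative approximation of the count the assertion performs (the grep is a sketch, not the test's actual code):

    out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr | grep -c "host: Stopped"
    out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr | grep -c "kubelet: Stopped"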

TestMultiNode/serial/RestartMultiNode (158.19s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-661456 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-661456 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (2m34.498916129s)
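The restart itself now succeeds; `--wait=true` makes start block until the verified cluster components report ready. Per `minikube start --help`, --wait also accepts `all`, `none`, or a comma-separated component list, e.g.:

    # Wait on every verifiable component rather than the default set:
    out/minikube-linux-amd64 start -p multinode-661456 --wait=all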
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr
multinode_test.go:366: status says both hosts are not running: args "out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr": 
-- stdout --
	multinode-661456
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-661456-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-661456-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1114 14:02:07.155879   29933 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:02:07.156039   29933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:02:07.156049   29933 out.go:309] Setting ErrFile to fd 2...
	I1114 14:02:07.156053   29933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:02:07.156250   29933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
	I1114 14:02:07.156445   29933 out.go:303] Setting JSON to false
	I1114 14:02:07.156480   29933 mustload.go:65] Loading cluster: multinode-661456
	I1114 14:02:07.156585   29933 notify.go:220] Checking for updates...
	I1114 14:02:07.156966   29933 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 14:02:07.156980   29933 status.go:255] checking status of multinode-661456 ...
	I1114 14:02:07.157373   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.157486   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.179768   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43611
	I1114 14:02:07.180257   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.180776   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.180799   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.181241   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.181455   29933 main.go:141] libmachine: (multinode-661456) Calling .GetState
	I1114 14:02:07.183338   29933 status.go:330] multinode-661456 host status = "Running" (err=<nil>)
	I1114 14:02:07.183354   29933 host.go:66] Checking if "multinode-661456" exists ...
	I1114 14:02:07.183646   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.183695   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.198301   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I1114 14:02:07.198741   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.199205   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.199228   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.199566   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.199733   29933 main.go:141] libmachine: (multinode-661456) Calling .GetIP
	I1114 14:02:07.202800   29933 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:02:07.203206   29933 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 14:02:07.203244   29933 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:02:07.203384   29933 host.go:66] Checking if "multinode-661456" exists ...
	I1114 14:02:07.203698   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.203738   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.217692   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45183
	I1114 14:02:07.218310   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.218833   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.218856   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.219219   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.219410   29933 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 14:02:07.219600   29933 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 14:02:07.219631   29933 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 14:02:07.222255   29933 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:02:07.222606   29933 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 14:02:07.222636   29933 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:02:07.222753   29933 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 14:02:07.222915   29933 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 14:02:07.223074   29933 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 14:02:07.223181   29933 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 14:02:07.317828   29933 ssh_runner.go:195] Run: systemctl --version
	I1114 14:02:07.323906   29933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:02:07.342051   29933 kubeconfig.go:92] found "multinode-661456" server: "https://192.168.39.222:8443"
	I1114 14:02:07.342081   29933 api_server.go:166] Checking apiserver status ...
	I1114 14:02:07.342114   29933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:02:07.356839   29933 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1637/cgroup
	I1114 14:02:07.366786   29933 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod53c7ea94508e5c77038361438391a9cf/981ae77038f8c46bf3926e079b01ddab39c326273f9475ab0e576535d19654a8"
	I1114 14:02:07.366837   29933 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod53c7ea94508e5c77038361438391a9cf/981ae77038f8c46bf3926e079b01ddab39c326273f9475ab0e576535d19654a8/freezer.state
	I1114 14:02:07.377647   29933 api_server.go:204] freezer state: "THAWED"
	I1114 14:02:07.377673   29933 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I1114 14:02:07.382591   29933 api_server.go:279] https://192.168.39.222:8443/healthz returned 200:
	ok
	I1114 14:02:07.382619   29933 status.go:421] multinode-661456 apiserver status = Running (err=<nil>)
	I1114 14:02:07.382628   29933 status.go:257] multinode-661456 status: &{Name:multinode-661456 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1114 14:02:07.382642   29933 status.go:255] checking status of multinode-661456-m02 ...
	I1114 14:02:07.383069   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.383118   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.398207   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39121
	I1114 14:02:07.398574   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.399025   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.399048   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.399351   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.399535   29933 main.go:141] libmachine: (multinode-661456-m02) Calling .GetState
	I1114 14:02:07.400920   29933 status.go:330] multinode-661456-m02 host status = "Running" (err=<nil>)
	I1114 14:02:07.400941   29933 host.go:66] Checking if "multinode-661456-m02" exists ...
	I1114 14:02:07.401219   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.401265   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.416089   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45515
	I1114 14:02:07.416483   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.416865   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.416885   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.417178   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.417358   29933 main.go:141] libmachine: (multinode-661456-m02) Calling .GetIP
	I1114 14:02:07.420018   29933 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:02:07.420382   29933 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:02:07.420415   29933 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:02:07.420509   29933 host.go:66] Checking if "multinode-661456-m02" exists ...
	I1114 14:02:07.420813   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.420851   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.436244   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45495
	I1114 14:02:07.436669   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.437166   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.437188   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.437541   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.437758   29933 main.go:141] libmachine: (multinode-661456-m02) Calling .DriverName
	I1114 14:02:07.437954   29933 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 14:02:07.437972   29933 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHHostname
	I1114 14:02:07.441047   29933 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:02:07.441497   29933 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:02:07.441535   29933 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:02:07.441703   29933 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHPort
	I1114 14:02:07.441867   29933 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:02:07.442019   29933 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHUsername
	I1114 14:02:07.442166   29933 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m02/id_rsa Username:docker}
	I1114 14:02:07.537690   29933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:02:07.552211   29933 status.go:257] multinode-661456-m02 status: &{Name:multinode-661456-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1114 14:02:07.552249   29933 status.go:255] checking status of multinode-661456-m03 ...
	I1114 14:02:07.552580   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.552629   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.567258   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43477
	I1114 14:02:07.567695   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.568167   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.568193   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.568484   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.568690   29933 main.go:141] libmachine: (multinode-661456-m03) Calling .GetState
	I1114 14:02:07.570148   29933 status.go:330] multinode-661456-m03 host status = "Running" (err=<nil>)
	I1114 14:02:07.570164   29933 host.go:66] Checking if "multinode-661456-m03" exists ...
	I1114 14:02:07.570538   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.570584   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.586290   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I1114 14:02:07.586730   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.587233   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.587264   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.587574   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.587722   29933 main.go:141] libmachine: (multinode-661456-m03) Calling .GetIP
	I1114 14:02:07.590391   29933 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:02:07.590896   29933 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:02:07.590926   29933 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:02:07.591070   29933 host.go:66] Checking if "multinode-661456-m03" exists ...
	I1114 14:02:07.591394   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.591433   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.605813   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I1114 14:02:07.606231   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.606712   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.606732   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.607022   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.607182   29933 main.go:141] libmachine: (multinode-661456-m03) Calling .DriverName
	I1114 14:02:07.607356   29933 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 14:02:07.607381   29933 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHHostname
	I1114 14:02:07.610177   29933 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:02:07.610532   29933 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:02:07.610570   29933 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:02:07.610734   29933 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHPort
	I1114 14:02:07.610916   29933 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:02:07.611061   29933 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHUsername
	I1114 14:02:07.611183   29933 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m03/id_rsa Username:docker}
	I1114 14:02:07.705666   29933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:02:07.718378   29933 status.go:257] multinode-661456-m03 status: &{Name:multinode-661456-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
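The worker-node checks in the stderr above reduce to one remote probe: the status command resolves each node's IP from its libvirt DHCP lease, opens an SSH session, and maps the exit code of `sudo systemctl is-active --quiet service kubelet` to Kubelet:Running or Kubelet:Stopped (APIServer and Kubeconfig are reported as Irrelevant for workers). A minimal standalone sketch of that probe, shelling out to the ssh binary rather than using minikube's sshutil; the key path is an illustrative placeholder:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletState mirrors the check in the log: run systemctl over SSH and
	// treat a zero exit as Running. In this sketch an SSH transport failure
	// also reads as Stopped; minikube's real code distinguishes the two.
	func kubeletState(user, host, keyPath string) string {
		cmd := exec.Command("ssh",
			"-i", keyPath,
			"-o", "StrictHostKeyChecking=no",
			fmt.Sprintf("%s@%s", user, host),
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			return "Stopped"
		}
		return "Running"
	}

	func main() {
		fmt.Println(kubeletState("docker", "192.168.39.228", "/path/to/id_rsa"))
	}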
multinode_test.go:370: status says both kubelets are not running: args "out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr": 
-- stdout --
	multinode-661456
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-661456-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-661456-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1114 14:02:07.155879   29933 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:02:07.156039   29933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:02:07.156049   29933 out.go:309] Setting ErrFile to fd 2...
	I1114 14:02:07.156053   29933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:02:07.156250   29933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
	I1114 14:02:07.156445   29933 out.go:303] Setting JSON to false
	I1114 14:02:07.156480   29933 mustload.go:65] Loading cluster: multinode-661456
	I1114 14:02:07.156585   29933 notify.go:220] Checking for updates...
	I1114 14:02:07.156966   29933 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 14:02:07.156980   29933 status.go:255] checking status of multinode-661456 ...
	I1114 14:02:07.157373   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.157486   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.179768   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43611
	I1114 14:02:07.180257   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.180776   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.180799   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.181241   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.181455   29933 main.go:141] libmachine: (multinode-661456) Calling .GetState
	I1114 14:02:07.183338   29933 status.go:330] multinode-661456 host status = "Running" (err=<nil>)
	I1114 14:02:07.183354   29933 host.go:66] Checking if "multinode-661456" exists ...
	I1114 14:02:07.183646   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.183695   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.198301   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I1114 14:02:07.198741   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.199205   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.199228   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.199566   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.199733   29933 main.go:141] libmachine: (multinode-661456) Calling .GetIP
	I1114 14:02:07.202800   29933 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:02:07.203206   29933 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 14:02:07.203244   29933 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:02:07.203384   29933 host.go:66] Checking if "multinode-661456" exists ...
	I1114 14:02:07.203698   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.203738   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.217692   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45183
	I1114 14:02:07.218310   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.218833   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.218856   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.219219   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.219410   29933 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 14:02:07.219600   29933 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 14:02:07.219631   29933 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 14:02:07.222255   29933 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:02:07.222606   29933 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 14:02:07.222636   29933 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:02:07.222753   29933 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 14:02:07.222915   29933 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 14:02:07.223074   29933 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 14:02:07.223181   29933 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 14:02:07.317828   29933 ssh_runner.go:195] Run: systemctl --version
	I1114 14:02:07.323906   29933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:02:07.342051   29933 kubeconfig.go:92] found "multinode-661456" server: "https://192.168.39.222:8443"
	I1114 14:02:07.342081   29933 api_server.go:166] Checking apiserver status ...
	I1114 14:02:07.342114   29933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:02:07.356839   29933 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1637/cgroup
	I1114 14:02:07.366786   29933 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod53c7ea94508e5c77038361438391a9cf/981ae77038f8c46bf3926e079b01ddab39c326273f9475ab0e576535d19654a8"
	I1114 14:02:07.366837   29933 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod53c7ea94508e5c77038361438391a9cf/981ae77038f8c46bf3926e079b01ddab39c326273f9475ab0e576535d19654a8/freezer.state
	I1114 14:02:07.377647   29933 api_server.go:204] freezer state: "THAWED"
	I1114 14:02:07.377673   29933 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I1114 14:02:07.382591   29933 api_server.go:279] https://192.168.39.222:8443/healthz returned 200:
	ok
	I1114 14:02:07.382619   29933 status.go:421] multinode-661456 apiserver status = Running (err=<nil>)
	I1114 14:02:07.382628   29933 status.go:257] multinode-661456 status: &{Name:multinode-661456 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1114 14:02:07.382642   29933 status.go:255] checking status of multinode-661456-m02 ...
	I1114 14:02:07.383069   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.383118   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.398207   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39121
	I1114 14:02:07.398574   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.399025   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.399048   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.399351   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.399535   29933 main.go:141] libmachine: (multinode-661456-m02) Calling .GetState
	I1114 14:02:07.400920   29933 status.go:330] multinode-661456-m02 host status = "Running" (err=<nil>)
	I1114 14:02:07.400941   29933 host.go:66] Checking if "multinode-661456-m02" exists ...
	I1114 14:02:07.401219   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.401265   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.416089   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45515
	I1114 14:02:07.416483   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.416865   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.416885   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.417178   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.417358   29933 main.go:141] libmachine: (multinode-661456-m02) Calling .GetIP
	I1114 14:02:07.420018   29933 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:02:07.420382   29933 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:02:07.420415   29933 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:02:07.420509   29933 host.go:66] Checking if "multinode-661456-m02" exists ...
	I1114 14:02:07.420813   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.420851   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.436244   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45495
	I1114 14:02:07.436669   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.437166   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.437188   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.437541   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.437758   29933 main.go:141] libmachine: (multinode-661456-m02) Calling .DriverName
	I1114 14:02:07.437954   29933 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 14:02:07.437972   29933 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHHostname
	I1114 14:02:07.441047   29933 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:02:07.441497   29933 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:02:07.441535   29933 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:02:07.441703   29933 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHPort
	I1114 14:02:07.441867   29933 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:02:07.442019   29933 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHUsername
	I1114 14:02:07.442166   29933 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m02/id_rsa Username:docker}
	I1114 14:02:07.537690   29933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:02:07.552211   29933 status.go:257] multinode-661456-m02 status: &{Name:multinode-661456-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1114 14:02:07.552249   29933 status.go:255] checking status of multinode-661456-m03 ...
	I1114 14:02:07.552580   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.552629   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.567258   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43477
	I1114 14:02:07.567695   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.568167   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.568193   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.568484   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.568690   29933 main.go:141] libmachine: (multinode-661456-m03) Calling .GetState
	I1114 14:02:07.570148   29933 status.go:330] multinode-661456-m03 host status = "Running" (err=<nil>)
	I1114 14:02:07.570164   29933 host.go:66] Checking if "multinode-661456-m03" exists ...
	I1114 14:02:07.570538   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.570584   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.586290   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I1114 14:02:07.586730   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.587233   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.587264   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.587574   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.587722   29933 main.go:141] libmachine: (multinode-661456-m03) Calling .GetIP
	I1114 14:02:07.590391   29933 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:02:07.590896   29933 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:02:07.590926   29933 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:02:07.591070   29933 host.go:66] Checking if "multinode-661456-m03" exists ...
	I1114 14:02:07.591394   29933 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:02:07.591433   29933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:02:07.605813   29933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I1114 14:02:07.606231   29933 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:02:07.606712   29933 main.go:141] libmachine: Using API Version  1
	I1114 14:02:07.606732   29933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:02:07.607022   29933 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:02:07.607182   29933 main.go:141] libmachine: (multinode-661456-m03) Calling .DriverName
	I1114 14:02:07.607356   29933 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 14:02:07.607381   29933 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHHostname
	I1114 14:02:07.610177   29933 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:02:07.610532   29933 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:02:07.610570   29933 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:02:07.610734   29933 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHPort
	I1114 14:02:07.610916   29933 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:02:07.611061   29933 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHUsername
	I1114 14:02:07.611183   29933 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m03/id_rsa Username:docker}
	I1114 14:02:07.705666   29933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:02:07.718378   29933 status.go:257] multinode-661456-m03 status: &{Name:multinode-661456-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
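For the control-plane node the same stderr shows two extra steps: status confirms the kube-apiserver process is not frozen (pgrep for the PID, then read freezer.state from its cgroup and expect "THAWED"), and then probes https://192.168.39.222:8443/healthz, expecting a 200 response with body "ok". A self-contained sketch of the healthz half; certificate verification is skipped here for brevity, whereas the real client trusts the profile's CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy performs the probe the log shows: GET /healthz and
	// require HTTP 200 with body "ok".
	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		fmt.Println(apiserverHealthy("https://192.168.39.222:8443"))
	}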
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
multinode_test.go:387: expected 2 nodes Ready status to be True, got 
-- stdout --
	' True
	 True
	 True
	'

-- /stdout --
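The go-template above prints one True/False line per node Ready condition, and the test counts the lines. It expected two nodes because the earlier `multinode-661456 node delete m03` was supposed to remove the third; the Audit table below shows that command with no End Time, consistent with it never completing, so the restarted cluster still reports three Ready nodes. The same count can be reproduced without templates by parsing `kubectl get nodes -o json` (this sketch assumes kubectl is on PATH and pointed at the cluster under test):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type nodeList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var nl nodeList
		if err := json.Unmarshal(out, &nl); err != nil {
			panic(err)
		}
		ready := 0
		for _, n := range nl.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					ready++
				}
			}
		}
		fmt.Printf("%d node(s) Ready\n", ready)
	}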
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-661456 -n multinode-661456
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-661456 logs -n 25: (1.673210802s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-661456 cp multinode-661456-m02:/home/docker/cp-test.txt                       | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | multinode-661456:/home/docker/cp-test_multinode-661456-m02_multinode-661456.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-661456 ssh -n                                                                 | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | multinode-661456-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-661456 ssh -n multinode-661456 sudo cat                                       | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | /home/docker/cp-test_multinode-661456-m02_multinode-661456.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-661456 cp multinode-661456-m02:/home/docker/cp-test.txt                       | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | multinode-661456-m03:/home/docker/cp-test_multinode-661456-m02_multinode-661456-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-661456 ssh -n                                                                 | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | multinode-661456-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-661456 ssh -n multinode-661456-m03 sudo cat                                   | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | /home/docker/cp-test_multinode-661456-m02_multinode-661456-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-661456 cp testdata/cp-test.txt                                                | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | multinode-661456-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-661456 ssh -n                                                                 | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | multinode-661456-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-661456 cp multinode-661456-m03:/home/docker/cp-test.txt                       | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile422828335/001/cp-test_multinode-661456-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-661456 ssh -n                                                                 | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | multinode-661456-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-661456 cp multinode-661456-m03:/home/docker/cp-test.txt                       | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | multinode-661456:/home/docker/cp-test_multinode-661456-m03_multinode-661456.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-661456 ssh -n                                                                 | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | multinode-661456-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-661456 ssh -n multinode-661456 sudo cat                                       | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | /home/docker/cp-test_multinode-661456-m03_multinode-661456.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-661456 cp multinode-661456-m03:/home/docker/cp-test.txt                       | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | multinode-661456-m02:/home/docker/cp-test_multinode-661456-m03_multinode-661456-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-661456 ssh -n                                                                 | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | multinode-661456-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-661456 ssh -n multinode-661456-m02 sudo cat                                   | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | /home/docker/cp-test_multinode-661456-m03_multinode-661456-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-661456 node stop m03                                                          | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	| node    | multinode-661456 node start                                                             | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-661456                                                                | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC |                     |
	| stop    | -p multinode-661456                                                                     | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:57 UTC |
	| start   | -p multinode-661456                                                                     | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:57 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-661456                                                                | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:57 UTC |                     |
	| node    | multinode-661456 node delete                                                            | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:57 UTC |                     |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-661456 stop                                                                   | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:57 UTC | 14 Nov 23 13:59 UTC |
	| start   | -p multinode-661456                                                                     | multinode-661456 | jenkins | v1.32.0 | 14 Nov 23 13:59 UTC | 14 Nov 23 14:02 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 13:59:32
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 13:59:32.653455   29270 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:59:32.653632   29270 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:59:32.653642   29270 out.go:309] Setting ErrFile to fd 2...
	I1114 13:59:32.653650   29270 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:59:32.653867   29270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
	I1114 13:59:32.654430   29270 out.go:303] Setting JSON to false
	I1114 13:59:32.655298   29270 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2523,"bootTime":1699967850,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 13:59:32.655361   29270 start.go:138] virtualization: kvm guest
	I1114 13:59:32.657815   29270 out.go:177] * [multinode-661456] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 13:59:32.659107   29270 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 13:59:32.659126   29270 notify.go:220] Checking for updates...
	I1114 13:59:32.661533   29270 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:59:32.662880   29270 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 13:59:32.664261   29270 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-6041/.minikube
	I1114 13:59:32.665567   29270 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 13:59:32.666833   29270 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 13:59:32.668411   29270 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 13:59:32.668883   29270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:59:32.668926   29270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:59:32.683410   29270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38371
	I1114 13:59:32.683907   29270 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:59:32.684921   29270 main.go:141] libmachine: Using API Version  1
	I1114 13:59:32.685025   29270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:59:32.686155   29270 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:59:32.686348   29270 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:59:32.686587   29270 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:59:32.686884   29270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:59:32.686909   29270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:59:32.701276   29270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I1114 13:59:32.701770   29270 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:59:32.702208   29270 main.go:141] libmachine: Using API Version  1
	I1114 13:59:32.702235   29270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:59:32.702538   29270 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:59:32.702745   29270 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:59:32.738599   29270 out.go:177] * Using the kvm2 driver based on existing profile
	I1114 13:59:32.739835   29270 start.go:298] selected driver: kvm2
	I1114 13:59:32.739848   29270 start.go:902] validating driver "kvm2" against &{Name:multinode-661456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-661456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:59:32.739976   29270 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 13:59:32.740342   29270 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 13:59:32.740436   29270 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17581-6041/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 13:59:32.755433   29270 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 13:59:32.756143   29270 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 13:59:32.756201   29270 cni.go:84] Creating CNI manager for ""
	I1114 13:59:32.756220   29270 cni.go:136] 3 nodes found, recommending kindnet
	I1114 13:59:32.756230   29270 start_flags.go:323] config:
	{Name:multinode-661456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-661456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:59:32.756515   29270 iso.go:125] acquiring lock: {Name:mk133084c23ed177adc820fc7d96b1f642fbaa07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 13:59:32.758987   29270 out.go:177] * Starting control plane node multinode-661456 in cluster multinode-661456
	I1114 13:59:32.760359   29270 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1114 13:59:32.760406   29270 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17581-6041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1114 13:59:32.760417   29270 cache.go:56] Caching tarball of preloaded images
	I1114 13:59:32.760488   29270 preload.go:174] Found /home/jenkins/minikube-integration/17581-6041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1114 13:59:32.760499   29270 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1114 13:59:32.760604   29270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/config.json ...
	I1114 13:59:32.760785   29270 start.go:365] acquiring machines lock for multinode-661456: {Name:mka8a7be0fef2cfa89eb7b4f7f1c7ded4441f603 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 13:59:32.760822   29270 start.go:369] acquired machines lock for "multinode-661456" in 19.737µs
	I1114 13:59:32.760836   29270 start.go:96] Skipping create...Using existing machine configuration
	I1114 13:59:32.760840   29270 fix.go:54] fixHost starting: 
	I1114 13:59:32.761084   29270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:59:32.761107   29270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:59:32.774992   29270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37945
	I1114 13:59:32.775408   29270 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:59:32.775905   29270 main.go:141] libmachine: Using API Version  1
	I1114 13:59:32.775927   29270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:59:32.776456   29270 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:59:32.776648   29270 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:59:32.776799   29270 main.go:141] libmachine: (multinode-661456) Calling .GetState
	I1114 13:59:32.778476   29270 fix.go:102] recreateIfNeeded on multinode-661456: state=Stopped err=<nil>
	I1114 13:59:32.778522   29270 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	W1114 13:59:32.778700   29270 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 13:59:32.781136   29270 out.go:177] * Restarting existing kvm2 VM for "multinode-661456" ...
	I1114 13:59:32.782469   29270 main.go:141] libmachine: (multinode-661456) Calling .Start
	I1114 13:59:32.782648   29270 main.go:141] libmachine: (multinode-661456) Ensuring networks are active...
	I1114 13:59:32.783386   29270 main.go:141] libmachine: (multinode-661456) Ensuring network default is active
	I1114 13:59:32.783697   29270 main.go:141] libmachine: (multinode-661456) Ensuring network mk-multinode-661456 is active
	I1114 13:59:32.784046   29270 main.go:141] libmachine: (multinode-661456) Getting domain xml...
	I1114 13:59:32.784801   29270 main.go:141] libmachine: (multinode-661456) Creating domain...
	I1114 13:59:33.990733   29270 main.go:141] libmachine: (multinode-661456) Waiting to get IP...
	I1114 13:59:33.991704   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:33.992098   29270 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:59:33.992174   29270 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:59:33.992083   29305 retry.go:31] will retry after 221.157106ms: waiting for machine to come up
	I1114 13:59:34.214457   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:34.214877   29270 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:59:34.214907   29270 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:59:34.214847   29305 retry.go:31] will retry after 263.452803ms: waiting for machine to come up
	I1114 13:59:34.480540   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:34.480862   29270 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:59:34.480889   29270 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:59:34.480828   29305 retry.go:31] will retry after 448.0835ms: waiting for machine to come up
	I1114 13:59:34.930239   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:34.930628   29270 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:59:34.930658   29270 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:59:34.930588   29305 retry.go:31] will retry after 371.271604ms: waiting for machine to come up
	I1114 13:59:35.303088   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:35.303480   29270 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:59:35.303512   29270 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:59:35.303424   29305 retry.go:31] will retry after 746.644051ms: waiting for machine to come up
	I1114 13:59:36.051268   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:36.051796   29270 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:59:36.051840   29270 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:59:36.051664   29305 retry.go:31] will retry after 617.676421ms: waiting for machine to come up
	I1114 13:59:36.670417   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:36.670837   29270 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:59:36.670865   29270 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:59:36.670800   29305 retry.go:31] will retry after 1.067995707s: waiting for machine to come up
	I1114 13:59:37.739915   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:37.740271   29270 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:59:37.740312   29270 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:59:37.740242   29305 retry.go:31] will retry after 1.013302151s: waiting for machine to come up
	I1114 13:59:38.755373   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:38.755787   29270 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:59:38.755818   29270 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:59:38.755735   29305 retry.go:31] will retry after 1.196399382s: waiting for machine to come up
	I1114 13:59:39.954118   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:39.954598   29270 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:59:39.954627   29270 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:59:39.954539   29305 retry.go:31] will retry after 2.181851718s: waiting for machine to come up
	I1114 13:59:42.138873   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:42.139263   29270 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:59:42.139297   29270 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:59:42.139204   29305 retry.go:31] will retry after 2.686638601s: waiting for machine to come up
	I1114 13:59:44.828356   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:44.828873   29270 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:59:44.828901   29270 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:59:44.828812   29305 retry.go:31] will retry after 2.95958375s: waiting for machine to come up
	I1114 13:59:47.789524   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:47.790004   29270 main.go:141] libmachine: (multinode-661456) DBG | unable to find current IP address of domain multinode-661456 in network mk-multinode-661456
	I1114 13:59:47.790033   29270 main.go:141] libmachine: (multinode-661456) DBG | I1114 13:59:47.789942   29305 retry.go:31] will retry after 3.82733193s: waiting for machine to come up
	I1114 13:59:51.621921   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:51.622315   29270 main.go:141] libmachine: (multinode-661456) Found IP for machine: 192.168.39.222
	I1114 13:59:51.622343   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has current primary IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
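	// Editor's sketch, not part of the captured log: the "will retry after
	// ..." lines above come from a poll loop in which libmachine asks
	// libvirt for the domain's DHCP lease and, until one appears, sleeps
	// for a randomized, growing interval. pollLease below is a stand-in
	// for that lookup, not the driver's real function.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func waitForIP(pollLease func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := pollLease(); err == nil {
				return ip, nil
			}
			// Randomize and roughly double the delay, mirroring the
			// 221ms ... 3.8s intervals in the log above.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			backoff *= 2
		}
		return "", errors.New("timed out waiting for an IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.39.222", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}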
	I1114 13:59:51.622352   29270 main.go:141] libmachine: (multinode-661456) Reserving static IP address...
	I1114 13:59:51.622806   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "multinode-661456", mac: "52:54:00:f9:71:4b", ip: "192.168.39.222"} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:51.622838   29270 main.go:141] libmachine: (multinode-661456) DBG | skip adding static IP to network mk-multinode-661456 - found existing host DHCP lease matching {name: "multinode-661456", mac: "52:54:00:f9:71:4b", ip: "192.168.39.222"}
	I1114 13:59:51.622853   29270 main.go:141] libmachine: (multinode-661456) Reserved static IP address: 192.168.39.222
	I1114 13:59:51.622868   29270 main.go:141] libmachine: (multinode-661456) Waiting for SSH to be available...
	I1114 13:59:51.622890   29270 main.go:141] libmachine: (multinode-661456) DBG | Getting to WaitForSSH function...
	I1114 13:59:51.624780   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:51.625086   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:51.625114   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:51.625180   29270 main.go:141] libmachine: (multinode-661456) DBG | Using SSH client type: external
	I1114 13:59:51.625220   29270 main.go:141] libmachine: (multinode-661456) DBG | Using SSH private key: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa (-rw-------)
	I1114 13:59:51.625256   29270 main.go:141] libmachine: (multinode-661456) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 13:59:51.625266   29270 main.go:141] libmachine: (multinode-661456) DBG | About to run SSH command:
	I1114 13:59:51.625277   29270 main.go:141] libmachine: (multinode-661456) DBG | exit 0
	I1114 13:59:51.721554   29270 main.go:141] libmachine: (multinode-661456) DBG | SSH cmd err, output: <nil>: 
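
The WaitForSSH probe above shells out to the system ssh binary and runs `exit 0` on the guest; a zero exit status means sshd is up and accepting the machine's key. A minimal sketch of that check (the option list is abridged from the logged argument vector; this is not the exact libmachine code):

package main

import (
	"fmt"
	"os/exec"
)

// sshReady returns nil once `ssh ... exit 0` succeeds against the guest.
func sshReady(ip, keyPath string) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	// a non-zero exit (host unreachable, auth refused) surfaces as err
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	err := sshReady("192.168.39.222", "/path/to/id_rsa")
	fmt.Println("ssh ready:", err == nil)
}
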
	I1114 13:59:51.721937   29270 main.go:141] libmachine: (multinode-661456) Calling .GetConfigRaw
	I1114 13:59:51.722557   29270 main.go:141] libmachine: (multinode-661456) Calling .GetIP
	I1114 13:59:51.724865   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:51.725203   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:51.725232   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:51.725468   29270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/config.json ...
	I1114 13:59:51.725684   29270 machine.go:88] provisioning docker machine ...
	I1114 13:59:51.725704   29270 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:59:51.725887   29270 main.go:141] libmachine: (multinode-661456) Calling .GetMachineName
	I1114 13:59:51.726090   29270 buildroot.go:166] provisioning hostname "multinode-661456"
	I1114 13:59:51.726110   29270 main.go:141] libmachine: (multinode-661456) Calling .GetMachineName
	I1114 13:59:51.726233   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:59:51.728243   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:51.728537   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:51.728561   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:51.728695   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:59:51.728866   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:51.728989   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:51.729116   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:59:51.729275   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 13:59:51.729752   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1114 13:59:51.729772   29270 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-661456 && echo "multinode-661456" | sudo tee /etc/hostname
	I1114 13:59:51.874482   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-661456
	
	I1114 13:59:51.874515   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:59:51.877235   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:51.877587   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:51.877616   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:51.877778   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:59:51.877991   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:51.878187   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:51.878324   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:59:51.878468   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 13:59:51.878782   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1114 13:59:51.878798   29270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-661456' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-661456/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-661456' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 13:59:52.017733   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 13:59:52.017763   29270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17581-6041/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-6041/.minikube}
	I1114 13:59:52.017809   29270 buildroot.go:174] setting up certificates
	I1114 13:59:52.017825   29270 provision.go:83] configureAuth start
	I1114 13:59:52.017843   29270 main.go:141] libmachine: (multinode-661456) Calling .GetMachineName
	I1114 13:59:52.018147   29270 main.go:141] libmachine: (multinode-661456) Calling .GetIP
	I1114 13:59:52.020722   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:52.021092   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:52.021112   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:52.021291   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:59:52.023399   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:52.023787   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:52.023816   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:52.024014   29270 provision.go:138] copyHostCerts
	I1114 13:59:52.024045   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem
	I1114 13:59:52.024077   29270 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem, removing ...
	I1114 13:59:52.024099   29270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem
	I1114 13:59:52.024170   29270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem (1082 bytes)
	I1114 13:59:52.024286   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem
	I1114 13:59:52.024308   29270 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem, removing ...
	I1114 13:59:52.024315   29270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem
	I1114 13:59:52.024350   29270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem (1123 bytes)
	I1114 13:59:52.024417   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem
	I1114 13:59:52.024444   29270 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem, removing ...
	I1114 13:59:52.024454   29270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem
	I1114 13:59:52.024497   29270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem (1675 bytes)
	I1114 13:59:52.024575   29270 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca-key.pem org=jenkins.multinode-661456 san=[192.168.39.222 192.168.39.222 localhost 127.0.0.1 minikube multinode-661456]
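
configureAuth issues a server certificate whose SAN list covers every name the Docker daemon may be reached by: the VM IP (listed twice, once as machine IP and once as SSH IP), localhost, 127.0.0.1, and the hostnames. A minimal crypto/x509 sketch of SAN-bearing certificate creation under those assumptions; the throwaway CA in main stands in for the pre-existing ca.pem/ca-key.pem pair:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate for the given SANs with the CA.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-661456"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,      // e.g. 192.168.39.222, 127.0.0.1
		DNSNames:     dnsNames, // e.g. localhost, minikube, multinode-661456
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// throwaway self-signed CA, standing in for the minikube CA on disk
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now(), NotAfter: time.Now().AddDate(10, 0, 0),
		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, _, err := newServerCert(ca, caKey,
		[]net.IP{net.ParseIP("192.168.39.222"), net.ParseIP("127.0.0.1")},
		[]string{"localhost", "minikube", "multinode-661456"})
	fmt.Println(len(der), err)
}
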
	I1114 13:59:52.170900   29270 provision.go:172] copyRemoteCerts
	I1114 13:59:52.170970   29270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 13:59:52.171001   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:59:52.173384   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:52.173732   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:52.173755   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:52.173945   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:59:52.174122   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:52.174263   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:59:52.174367   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 13:59:52.267237   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1114 13:59:52.267312   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 13:59:52.289621   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1114 13:59:52.289681   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1114 13:59:52.311986   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1114 13:59:52.312089   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 13:59:52.335976   29270 provision.go:86] duration metric: configureAuth took 318.136069ms
	I1114 13:59:52.336001   29270 buildroot.go:189] setting minikube options for container-runtime
	I1114 13:59:52.336244   29270 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 13:59:52.336268   29270 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:59:52.336524   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:59:52.338992   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:52.339411   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:52.339445   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:52.339628   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:59:52.339826   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:52.339971   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:52.340097   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:59:52.340231   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 13:59:52.340589   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1114 13:59:52.340609   29270 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1114 13:59:52.475312   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1114 13:59:52.475341   29270 buildroot.go:70] root file system type: tmpfs
	I1114 13:59:52.475487   29270 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1114 13:59:52.475517   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:59:52.477936   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:52.478319   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:52.478354   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:52.478492   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:59:52.478695   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:52.478839   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:52.478951   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:59:52.479062   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 13:59:52.479375   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1114 13:59:52.479446   29270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1114 13:59:52.622387   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1114 13:59:52.622418   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:59:52.625014   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:52.625364   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:52.625389   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:52.625517   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:59:52.625699   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:52.625872   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:52.625988   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:59:52.626114   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 13:59:52.626427   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1114 13:59:52.626452   29270 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1114 13:59:53.519116   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1114 13:59:53.519139   29270 machine.go:91] provisioned docker machine in 1.793442475s
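
The unit update just above is idempotent by construction: the freshly rendered unit is written to docker.service.new, diffed against whatever is installed, and only on a difference (here the installed file did not exist yet, hence the `diff: can't stat` output) is it moved into place, enabled, and restarted. A rough Go sketch of that guard; restartDocker is a hypothetical stand-in for the three systemctl calls in the logged one-liner:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged replaces dst with src and restarts docker only when
// the rendered unit actually differs from what is installed.
func installIfChanged(src, dst string) error {
	newUnit, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	old, err := os.ReadFile(dst)
	if err == nil && bytes.Equal(old, newUnit) {
		return nil // unit unchanged; skip a needless docker restart
	}
	if err := os.Rename(src, dst); err != nil {
		return err
	}
	return restartDocker()
}

func restartDocker() error {
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		if err := exec.Command("systemctl", append([]string{"-f"}, args...)...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	fmt.Println(installIfChanged("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service"))
}
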
	I1114 13:59:53.519150   29270 start.go:300] post-start starting for "multinode-661456" (driver="kvm2")
	I1114 13:59:53.519161   29270 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 13:59:53.519179   29270 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:59:53.519468   29270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 13:59:53.519493   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:59:53.522470   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:53.523023   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:53.523048   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:53.523266   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:59:53.523467   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:53.523629   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:59:53.523808   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 13:59:53.619041   29270 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 13:59:53.623205   29270 command_runner.go:130] > NAME=Buildroot
	I1114 13:59:53.623225   29270 command_runner.go:130] > VERSION=2021.02.12-1-gccdd192-dirty
	I1114 13:59:53.623230   29270 command_runner.go:130] > ID=buildroot
	I1114 13:59:53.623235   29270 command_runner.go:130] > VERSION_ID=2021.02.12
	I1114 13:59:53.623239   29270 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1114 13:59:53.623419   29270 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 13:59:53.623441   29270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-6041/.minikube/addons for local assets ...
	I1114 13:59:53.623511   29270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-6041/.minikube/files for local assets ...
	I1114 13:59:53.623604   29270 filesync.go:149] local asset: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem -> 132382.pem in /etc/ssl/certs
	I1114 13:59:53.623617   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem -> /etc/ssl/certs/132382.pem
	I1114 13:59:53.623719   29270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 13:59:53.631783   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem --> /etc/ssl/certs/132382.pem (1708 bytes)
	I1114 13:59:53.654769   29270 start.go:303] post-start completed in 135.606056ms
	I1114 13:59:53.654792   29270 fix.go:56] fixHost completed within 20.893950691s
	I1114 13:59:53.654815   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:59:53.656991   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:53.657299   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:53.657333   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:53.657446   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:59:53.657653   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:53.657814   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:53.657951   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:59:53.658087   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 13:59:53.658419   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.222 22 <nil> <nil>}
	I1114 13:59:53.658430   29270 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1114 13:59:53.790183   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699970393.738967202
	
	I1114 13:59:53.790202   29270 fix.go:206] guest clock: 1699970393.738967202
	I1114 13:59:53.790209   29270 fix.go:219] Guest: 2023-11-14 13:59:53.738967202 +0000 UTC Remote: 2023-11-14 13:59:53.654797178 +0000 UTC m=+21.050336822 (delta=84.170024ms)
	I1114 13:59:53.790226   29270 fix.go:190] guest clock delta is within tolerance: 84.170024ms
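
The clock check parses the `date +%s.%N` output from the guest and compares it against the host's wall clock; a delta inside the tolerance means no time sync is needed before starting Kubernetes. A small sketch of that comparison (the tolerance value here is illustrative, not necessarily minikube's):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1699970393.738967202" // what `date +%s.%N` returned over SSH
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	// float64 loses nanosecond precision; fine for a tolerance check
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // illustrative threshold
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v too large; would sync\n", delta)
	}
}
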
	I1114 13:59:53.790231   29270 start.go:83] releasing machines lock for "multinode-661456", held for 21.029399558s
	I1114 13:59:53.790257   29270 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:59:53.790489   29270 main.go:141] libmachine: (multinode-661456) Calling .GetIP
	I1114 13:59:53.792886   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:53.793289   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:53.793323   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:53.793456   29270 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:59:53.794114   29270 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:59:53.794296   29270 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:59:53.794372   29270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 13:59:53.794415   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:59:53.794516   29270 ssh_runner.go:195] Run: cat /version.json
	I1114 13:59:53.794539   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:59:53.796899   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:53.797139   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:53.797282   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:53.797309   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:53.797481   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:59:53.797575   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:53.797616   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:53.797662   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:53.797739   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:59:53.797938   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:59:53.797944   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:59:53.798134   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:59:53.798136   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 13:59:53.798263   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 13:59:53.911762   29270 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1114 13:59:53.912704   29270 command_runner.go:130] > {"iso_version": "v1.32.1-1699648094-17581", "kicbase_version": "v0.0.42-1699485386-17565", "minikube_version": "v1.32.0", "commit": "4770ca4ce6b6c59d35dfae229a3cdfca5570c673"}
	I1114 13:59:53.912848   29270 ssh_runner.go:195] Run: systemctl --version
	I1114 13:59:53.918673   29270 command_runner.go:130] > systemd 247 (247)
	I1114 13:59:53.918706   29270 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1114 13:59:53.918783   29270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 13:59:53.924066   29270 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1114 13:59:53.924221   29270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 13:59:53.924287   29270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 13:59:53.939156   29270 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1114 13:59:53.939352   29270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
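
Because the cluster will run kindnet, pre-existing bridge/podman CNI configs are renamed aside with a .mk_disabled suffix rather than deleted, which keeps the change reversible. The find/-exec mv one-liner above maps to roughly the following (patterns taken from the log; this is a sketch, not the shipped code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pattern))
		if err != nil {
			panic(err)
		}
		for _, f := range matches {
			if strings.HasSuffix(f, ".mk_disabled") {
				continue // already disabled on a previous start
			}
			fmt.Printf("disabling %s\n", f)
			if err := os.Rename(f, f+".mk_disabled"); err != nil {
				panic(err)
			}
		}
	}
}
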
	I1114 13:59:53.939372   29270 start.go:472] detecting cgroup driver to use...
	I1114 13:59:53.939516   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 13:59:53.959724   29270 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1114 13:59:53.960182   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1114 13:59:53.969525   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1114 13:59:53.978915   29270 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1114 13:59:53.978987   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1114 13:59:53.988477   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1114 13:59:53.997827   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1114 13:59:54.007332   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1114 13:59:54.017002   29270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 13:59:54.027000   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1114 13:59:54.036374   29270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 13:59:54.044456   29270 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1114 13:59:54.044579   29270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 13:59:54.052866   29270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 13:59:54.150139   29270 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1114 13:59:54.168714   29270 start.go:472] detecting cgroup driver to use...
	I1114 13:59:54.168808   29270 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1114 13:59:54.180334   29270 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1114 13:59:54.181313   29270 command_runner.go:130] > [Unit]
	I1114 13:59:54.181326   29270 command_runner.go:130] > Description=Docker Application Container Engine
	I1114 13:59:54.181332   29270 command_runner.go:130] > Documentation=https://docs.docker.com
	I1114 13:59:54.181338   29270 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1114 13:59:54.181350   29270 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1114 13:59:54.181358   29270 command_runner.go:130] > StartLimitBurst=3
	I1114 13:59:54.181362   29270 command_runner.go:130] > StartLimitIntervalSec=60
	I1114 13:59:54.181367   29270 command_runner.go:130] > [Service]
	I1114 13:59:54.181371   29270 command_runner.go:130] > Type=notify
	I1114 13:59:54.181377   29270 command_runner.go:130] > Restart=on-failure
	I1114 13:59:54.181385   29270 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1114 13:59:54.181394   29270 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1114 13:59:54.181403   29270 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1114 13:59:54.181412   29270 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1114 13:59:54.181421   29270 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1114 13:59:54.181434   29270 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1114 13:59:54.181447   29270 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1114 13:59:54.181465   29270 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1114 13:59:54.181480   29270 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1114 13:59:54.181486   29270 command_runner.go:130] > ExecStart=
	I1114 13:59:54.181502   29270 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1114 13:59:54.181510   29270 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1114 13:59:54.181519   29270 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1114 13:59:54.181528   29270 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1114 13:59:54.181535   29270 command_runner.go:130] > LimitNOFILE=infinity
	I1114 13:59:54.181542   29270 command_runner.go:130] > LimitNPROC=infinity
	I1114 13:59:54.181549   29270 command_runner.go:130] > LimitCORE=infinity
	I1114 13:59:54.181559   29270 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1114 13:59:54.181569   29270 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1114 13:59:54.181575   29270 command_runner.go:130] > TasksMax=infinity
	I1114 13:59:54.181583   29270 command_runner.go:130] > TimeoutStartSec=0
	I1114 13:59:54.181590   29270 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1114 13:59:54.181596   29270 command_runner.go:130] > Delegate=yes
	I1114 13:59:54.181602   29270 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1114 13:59:54.181606   29270 command_runner.go:130] > KillMode=process
	I1114 13:59:54.181610   29270 command_runner.go:130] > [Install]
	I1114 13:59:54.181633   29270 command_runner.go:130] > WantedBy=multi-user.target
	I1114 13:59:54.182067   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 13:59:54.195068   29270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 13:59:54.212483   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 13:59:54.225276   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1114 13:59:54.237171   29270 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1114 13:59:54.265927   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1114 13:59:54.278144   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 13:59:54.295010   29270 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1114 13:59:54.295094   29270 ssh_runner.go:195] Run: which cri-dockerd
	I1114 13:59:54.298587   29270 command_runner.go:130] > /usr/bin/cri-dockerd
	I1114 13:59:54.298742   29270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1114 13:59:54.306494   29270 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1114 13:59:54.322177   29270 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1114 13:59:54.423700   29270 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1114 13:59:54.532131   29270 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1114 13:59:54.532253   29270 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1114 13:59:54.548692   29270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 13:59:54.656205   29270 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1114 13:59:56.244036   29270 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.587800147s)
	I1114 13:59:56.244095   29270 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1114 13:59:56.346800   29270 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1114 13:59:56.459223   29270 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1114 13:59:56.577517   29270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 13:59:56.694360   29270 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1114 13:59:56.713258   29270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 13:59:56.825599   29270 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1114 13:59:56.914111   29270 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1114 13:59:56.914179   29270 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1114 13:59:56.919734   29270 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1114 13:59:56.919754   29270 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1114 13:59:56.919760   29270 command_runner.go:130] > Device: 16h/22d	Inode: 845         Links: 1
	I1114 13:59:56.919767   29270 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1114 13:59:56.919782   29270 command_runner.go:130] > Access: 2023-11-14 13:59:56.787880393 +0000
	I1114 13:59:56.919787   29270 command_runner.go:130] > Modify: 2023-11-14 13:59:56.787880393 +0000
	I1114 13:59:56.919793   29270 command_runner.go:130] > Change: 2023-11-14 13:59:56.791880393 +0000
	I1114 13:59:56.919798   29270 command_runner.go:130] >  Birth: -
	I1114 13:59:56.920131   29270 start.go:540] Will wait 60s for crictl version
	I1114 13:59:56.920187   29270 ssh_runner.go:195] Run: which crictl
	I1114 13:59:56.923938   29270 command_runner.go:130] > /usr/bin/crictl
	I1114 13:59:56.924690   29270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 13:59:56.979135   29270 command_runner.go:130] > Version:  0.1.0
	I1114 13:59:56.979164   29270 command_runner.go:130] > RuntimeName:  docker
	I1114 13:59:56.979173   29270 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1114 13:59:56.979181   29270 command_runner.go:130] > RuntimeApiVersion:  v1
	I1114 13:59:56.979196   29270 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1114 13:59:56.979236   29270 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1114 13:59:57.005866   29270 command_runner.go:130] > 24.0.7
	I1114 13:59:57.006166   29270 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1114 13:59:57.031900   29270 command_runner.go:130] > 24.0.7
	I1114 13:59:57.035235   29270 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
	I1114 13:59:57.035268   29270 main.go:141] libmachine: (multinode-661456) Calling .GetIP
	I1114 13:59:57.037821   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:57.038169   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:59:57.038191   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:59:57.038381   29270 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 13:59:57.042152   29270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
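
Both host entries (host.minikube.internal here, control-plane.minikube.internal a few steps down) are upserted the same way: filter out any stale line for the name, append the fresh mapping, and copy the result back over /etc/hosts. A Go rendering of that filter-and-append, as a sketch of the pattern rather than the bash actually run:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry rewrites the hosts file so that exactly one line maps
// name to ip, preserving every unrelated entry.
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(upsertHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"))
}
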
	I1114 13:59:57.053446   29270 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1114 13:59:57.053523   29270 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1114 13:59:57.071661   29270 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1114 13:59:57.071684   29270 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1114 13:59:57.071694   29270 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 13:59:57.071703   29270 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1114 13:59:57.071708   29270 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1114 13:59:57.071713   29270 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1114 13:59:57.071722   29270 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1114 13:59:57.071728   29270 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1114 13:59:57.071737   29270 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:59:57.071749   29270 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1114 13:59:57.072616   29270 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1114 13:59:57.072634   29270 docker.go:601] Images already preloaded, skipping extraction
	I1114 13:59:57.072683   29270 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1114 13:59:57.091100   29270 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.3
	I1114 13:59:57.091128   29270 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.3
	I1114 13:59:57.091141   29270 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.3
	I1114 13:59:57.091150   29270 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.3
	I1114 13:59:57.091158   29270 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1114 13:59:57.091168   29270 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1114 13:59:57.091178   29270 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1114 13:59:57.091186   29270 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1114 13:59:57.091197   29270 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:59:57.091206   29270 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1114 13:59:57.091942   29270 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1114 13:59:57.091962   29270 cache_images.go:84] Images are preloaded, skipping loading
	I1114 13:59:57.092024   29270 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1114 13:59:57.118185   29270 command_runner.go:130] > cgroupfs
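
The `cgroupfs` answer above decides what gets written into the kubelet config further down (cgroupDriver: cgroupfs); a mismatch between the runtime's cgroup driver and the kubelet's is a classic source of crash-looping nodes. The probe itself is just a template query against docker info, roughly:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	driver := strings.TrimSpace(string(out)) // "cgroupfs" or "systemd"
	fmt.Println("configuring kubelet with cgroupDriver:", driver)
}
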
	I1114 13:59:57.118388   29270 cni.go:84] Creating CNI manager for ""
	I1114 13:59:57.118402   29270 cni.go:136] 3 nodes found, recommending kindnet
	I1114 13:59:57.118420   29270 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 13:59:57.118436   29270 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.222 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-661456 NodeName:multinode-661456 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 13:59:57.118599   29270 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-661456"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
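
The stanzas above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct logged at 13:59:57.118436. A toy text/template rendering of just the first stanza, reusing a few of those field values (the struct shape is illustrative, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

type initCfg struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	CRISocket        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	cfg := initCfg{
		AdvertiseAddress: "192.168.39.222",
		APIServerPort:    8443,
		NodeName:         "multinode-661456",
		CRISocket:        "/var/run/cri-dockerd.sock",
	}
	// render to stdout; error ignored for the sketch
	_ = template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, cfg)
}
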
	
	I1114 13:59:57.118675   29270 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-661456 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-661456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 13:59:57.118721   29270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 13:59:57.128304   29270 command_runner.go:130] > kubeadm
	I1114 13:59:57.128326   29270 command_runner.go:130] > kubectl
	I1114 13:59:57.128333   29270 command_runner.go:130] > kubelet
	I1114 13:59:57.128416   29270 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 13:59:57.128487   29270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 13:59:57.137208   29270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1114 13:59:57.152643   29270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 13:59:57.167741   29270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
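	The four documents written above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) land in /var/tmp/minikube/kubeadm.yaml.new before being swapped in. A minimal sketch for checking such a file by hand, assuming kubeadm v1.26+ (which added the validate subcommand); the binary and config paths are taken from this log:
	
	  # Validate the rendered config against the kubeadm API schema.
	  sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new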
	I1114 13:59:57.183931   29270 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I1114 13:59:57.187402   29270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 13:59:57.198841   29270 certs.go:56] Setting up /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456 for IP: 192.168.39.222
	I1114 13:59:57.198872   29270 certs.go:190] acquiring lock for shared ca certs: {Name:mkb3fe4539ce9ed96ff0e979200082f9548591da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:59:57.199003   29270 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.key
	I1114 13:59:57.199043   29270 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.key
	I1114 13:59:57.199105   29270 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.key
	I1114 13:59:57.199160   29270 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/apiserver.key.ac9b12d1
	I1114 13:59:57.199193   29270 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/proxy-client.key
	I1114 13:59:57.199203   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1114 13:59:57.199216   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1114 13:59:57.199229   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1114 13:59:57.199240   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1114 13:59:57.199252   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1114 13:59:57.199264   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1114 13:59:57.199275   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1114 13:59:57.199291   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1114 13:59:57.199335   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/13238.pem (1338 bytes)
	W1114 13:59:57.199360   29270 certs.go:433] ignoring /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/13238_empty.pem, impossibly tiny 0 bytes
	I1114 13:59:57.199371   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca-key.pem (1679 bytes)
	I1114 13:59:57.199392   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem (1082 bytes)
	I1114 13:59:57.199416   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem (1123 bytes)
	I1114 13:59:57.199437   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem (1675 bytes)
	I1114 13:59:57.199476   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem (1708 bytes)
	I1114 13:59:57.199501   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/13238.pem -> /usr/share/ca-certificates/13238.pem
	I1114 13:59:57.199514   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem -> /usr/share/ca-certificates/132382.pem
	I1114 13:59:57.199526   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:59:57.200056   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 13:59:57.222840   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 13:59:57.245771   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 13:59:57.267849   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 13:59:57.289954   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 13:59:57.312279   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1114 13:59:57.334116   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 13:59:57.356775   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 13:59:57.379256   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/certs/13238.pem --> /usr/share/ca-certificates/13238.pem (1338 bytes)
	I1114 13:59:57.402653   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem --> /usr/share/ca-certificates/132382.pem (1708 bytes)
	I1114 13:59:57.424897   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 13:59:57.446801   29270 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 13:59:57.462753   29270 ssh_runner.go:195] Run: openssl version
	I1114 13:59:57.467853   29270 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1114 13:59:57.467914   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13238.pem && ln -fs /usr/share/ca-certificates/13238.pem /etc/ssl/certs/13238.pem"
	I1114 13:59:57.477865   29270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13238.pem
	I1114 13:59:57.482267   29270 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 14 13:40 /usr/share/ca-certificates/13238.pem
	I1114 13:59:57.482471   29270 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 13:40 /usr/share/ca-certificates/13238.pem
	I1114 13:59:57.482521   29270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13238.pem
	I1114 13:59:57.487658   29270 command_runner.go:130] > 51391683
	I1114 13:59:57.487699   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13238.pem /etc/ssl/certs/51391683.0"
	I1114 13:59:57.497818   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132382.pem && ln -fs /usr/share/ca-certificates/132382.pem /etc/ssl/certs/132382.pem"
	I1114 13:59:57.508568   29270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132382.pem
	I1114 13:59:57.513024   29270 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 14 13:40 /usr/share/ca-certificates/132382.pem
	I1114 13:59:57.513056   29270 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 13:40 /usr/share/ca-certificates/132382.pem
	I1114 13:59:57.513094   29270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132382.pem
	I1114 13:59:57.518193   29270 command_runner.go:130] > 3ec20f2e
	I1114 13:59:57.518547   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132382.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 13:59:57.527944   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 13:59:57.538281   29270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:59:57.542639   29270 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 14 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:59:57.542750   29270 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:59:57.542815   29270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:59:57.548286   29270 command_runner.go:130] > b5213941
	I1114 13:59:57.548361   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
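	The hash-and-symlink sequence above is OpenSSL's standard trust wiring: CA certificates in /etc/ssl/certs are looked up by subject-hash filename, so each PEM needs a <hash>.0 symlink. The same step in two lines, using the minikubeCA path from this log:
	
	  # openssl x509 -hash prints the subject hash (b5213941 for this CA).
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"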
	I1114 13:59:57.558831   29270 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 13:59:57.563441   29270 command_runner.go:130] > ca.crt
	I1114 13:59:57.563464   29270 command_runner.go:130] > ca.key
	I1114 13:59:57.563471   29270 command_runner.go:130] > healthcheck-client.crt
	I1114 13:59:57.563478   29270 command_runner.go:130] > healthcheck-client.key
	I1114 13:59:57.563485   29270 command_runner.go:130] > peer.crt
	I1114 13:59:57.563508   29270 command_runner.go:130] > peer.key
	I1114 13:59:57.563516   29270 command_runner.go:130] > server.crt
	I1114 13:59:57.563521   29270 command_runner.go:130] > server.key
	I1114 13:59:57.563573   29270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 13:59:57.568984   29270 command_runner.go:130] > Certificate will not expire
	I1114 13:59:57.569279   29270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 13:59:57.575295   29270 command_runner.go:130] > Certificate will not expire
	I1114 13:59:57.575454   29270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 13:59:57.581187   29270 command_runner.go:130] > Certificate will not expire
	I1114 13:59:57.581251   29270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 13:59:57.586809   29270 command_runner.go:130] > Certificate will not expire
	I1114 13:59:57.586873   29270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 13:59:57.592540   29270 command_runner.go:130] > Certificate will not expire
	I1114 13:59:57.592631   29270 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1114 13:59:57.598277   29270 command_runner.go:130] > Certificate will not expire
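	Each -checkend probe above asks whether a certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 and prints "Certificate will not expire" when it will, which is what lets the restart path skip regenerating certs. For example:
	
	  # Exit status 0 means the cert outlives the next 24h; 1 means it will expire.
	  openssl x509 -noout -checkend 86400 \
	    -in /var/lib/minikube/certs/etcd/server.crt && echo "still valid for 24h"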
	I1114 13:59:57.598387   29270 kubeadm.go:404] StartCluster: {Name:multinode-661456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-661456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:59:57.598545   29270 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1114 13:59:57.616895   29270 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 13:59:57.627088   29270 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1114 13:59:57.627109   29270 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1114 13:59:57.627115   29270 command_runner.go:130] > /var/lib/minikube/etcd:
	I1114 13:59:57.627119   29270 command_runner.go:130] > member
	I1114 13:59:57.627273   29270 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 13:59:57.627296   29270 kubeadm.go:636] restartCluster start
	I1114 13:59:57.627346   29270 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 13:59:57.636796   29270 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 13:59:57.637222   29270 kubeconfig.go:135] verify returned: extract IP: "multinode-661456" does not appear in /home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 13:59:57.637321   29270 kubeconfig.go:146] "multinode-661456" context is missing from /home/jenkins/minikube-integration/17581-6041/kubeconfig - will repair!
	I1114 13:59:57.637649   29270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-6041/kubeconfig: {Name:mk8c7c760be5355229ff2da52cb7898ad12a909c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:59:57.638094   29270 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 13:59:57.638304   29270 kapi.go:59] client config for multinode-661456: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.key", CAFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c236c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 13:59:57.638890   29270 cert_rotation.go:137] Starting client certificate rotation controller
	I1114 13:59:57.638985   29270 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 13:59:57.650638   29270 api_server.go:166] Checking apiserver status ...
	I1114 13:59:57.650684   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 13:59:57.662350   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 13:59:57.662366   29270 api_server.go:166] Checking apiserver status ...
	I1114 13:59:57.662406   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 13:59:57.673647   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 13:59:58.174565   29270 api_server.go:166] Checking apiserver status ...
	I1114 13:59:58.174680   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 13:59:58.186971   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 13:59:58.674604   29270 api_server.go:166] Checking apiserver status ...
	I1114 13:59:58.674670   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 13:59:58.686731   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 13:59:59.174320   29270 api_server.go:166] Checking apiserver status ...
	I1114 13:59:59.174407   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 13:59:59.186040   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 13:59:59.674640   29270 api_server.go:166] Checking apiserver status ...
	I1114 13:59:59.674742   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 13:59:59.686552   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:00:00.174112   29270 api_server.go:166] Checking apiserver status ...
	I1114 14:00:00.174203   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:00:00.186559   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:00:00.674083   29270 api_server.go:166] Checking apiserver status ...
	I1114 14:00:00.674189   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:00:00.686823   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:00:01.174443   29270 api_server.go:166] Checking apiserver status ...
	I1114 14:00:01.174524   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:00:01.186112   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:00:01.674805   29270 api_server.go:166] Checking apiserver status ...
	I1114 14:00:01.674893   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:00:01.687027   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:00:02.174680   29270 api_server.go:166] Checking apiserver status ...
	I1114 14:00:02.174774   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:00:02.186821   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:00:02.674567   29270 api_server.go:166] Checking apiserver status ...
	I1114 14:00:02.674643   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:00:02.686850   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:00:03.173874   29270 api_server.go:166] Checking apiserver status ...
	I1114 14:00:03.173965   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:00:03.185743   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:00:03.674331   29270 api_server.go:166] Checking apiserver status ...
	I1114 14:00:03.674397   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:00:03.686324   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:00:04.173835   29270 api_server.go:166] Checking apiserver status ...
	I1114 14:00:04.173915   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:00:04.185708   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:00:04.674253   29270 api_server.go:166] Checking apiserver status ...
	I1114 14:00:04.674329   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:00:04.686312   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:00:05.173871   29270 api_server.go:166] Checking apiserver status ...
	I1114 14:00:05.173974   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:00:05.185513   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:00:05.674047   29270 api_server.go:166] Checking apiserver status ...
	I1114 14:00:05.674135   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:00:05.686328   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:00:06.173840   29270 api_server.go:166] Checking apiserver status ...
	I1114 14:00:06.173941   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:00:06.185698   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:00:06.674226   29270 api_server.go:166] Checking apiserver status ...
	I1114 14:00:06.674327   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:00:06.687689   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:00:07.174234   29270 api_server.go:166] Checking apiserver status ...
	I1114 14:00:07.174326   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:00:07.185668   29270 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
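	The block above is a fixed-interval poll: roughly every 500ms the same pgrep probe is retried until an apiserver process appears or the deadline passes; here it never appears, so the restart path falls through to "needs reconfigure" below. A bash equivalent of the probe, with a hypothetical 10-second cap:
	
	  # Poll for a running kube-apiserver; give up after 10s.
	  timeout 10 bash -c \
	    'until sudo pgrep -xnf "kube-apiserver.*minikube.*"; do sleep 0.5; done'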
	I1114 14:00:07.651411   29270 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 14:00:07.651462   29270 kubeadm.go:1128] stopping kube-system containers ...
	I1114 14:00:07.651550   29270 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1114 14:00:07.674140   29270 command_runner.go:130] > feeae8ba9200
	I1114 14:00:07.674165   29270 command_runner.go:130] > bab0d7f33070
	I1114 14:00:07.674171   29270 command_runner.go:130] > 2bd3f30d7d31
	I1114 14:00:07.674177   29270 command_runner.go:130] > e19ac901a161
	I1114 14:00:07.674191   29270 command_runner.go:130] > 8fd11e5a867b
	I1114 14:00:07.674197   29270 command_runner.go:130] > 1f4caae4ccd2
	I1114 14:00:07.674204   29270 command_runner.go:130] > b41d405d05c1
	I1114 14:00:07.674210   29270 command_runner.go:130] > 7d18371e20a9
	I1114 14:00:07.674217   29270 command_runner.go:130] > 0ce64485a2de
	I1114 14:00:07.674225   29270 command_runner.go:130] > 510016ba0c81
	I1114 14:00:07.674232   29270 command_runner.go:130] > 8f35d19c847d
	I1114 14:00:07.674236   29270 command_runner.go:130] > 4037c5756e5b
	I1114 14:00:07.674241   29270 command_runner.go:130] > fc0ae24f94d2
	I1114 14:00:07.674245   29270 command_runner.go:130] > a72669bfefa3
	I1114 14:00:07.674249   29270 command_runner.go:130] > 749b09a9ecc0
	I1114 14:00:07.674257   29270 command_runner.go:130] > 7965b783edc4
	I1114 14:00:07.674275   29270 docker.go:469] Stopping containers: [feeae8ba9200 bab0d7f33070 2bd3f30d7d31 e19ac901a161 8fd11e5a867b 1f4caae4ccd2 b41d405d05c1 7d18371e20a9 0ce64485a2de 510016ba0c81 8f35d19c847d 4037c5756e5b fc0ae24f94d2 a72669bfefa3 749b09a9ecc0 7965b783edc4]
	I1114 14:00:07.674332   29270 ssh_runner.go:195] Run: docker stop feeae8ba9200 bab0d7f33070 2bd3f30d7d31 e19ac901a161 8fd11e5a867b 1f4caae4ccd2 b41d405d05c1 7d18371e20a9 0ce64485a2de 510016ba0c81 8f35d19c847d 4037c5756e5b fc0ae24f94d2 a72669bfefa3 749b09a9ecc0 7965b783edc4
	I1114 14:00:07.696549   29270 command_runner.go:130] > feeae8ba9200
	I1114 14:00:07.696569   29270 command_runner.go:130] > bab0d7f33070
	I1114 14:00:07.696580   29270 command_runner.go:130] > 2bd3f30d7d31
	I1114 14:00:07.696585   29270 command_runner.go:130] > e19ac901a161
	I1114 14:00:07.696592   29270 command_runner.go:130] > 8fd11e5a867b
	I1114 14:00:07.696597   29270 command_runner.go:130] > 1f4caae4ccd2
	I1114 14:00:07.696603   29270 command_runner.go:130] > b41d405d05c1
	I1114 14:00:07.696609   29270 command_runner.go:130] > 7d18371e20a9
	I1114 14:00:07.696617   29270 command_runner.go:130] > 0ce64485a2de
	I1114 14:00:07.696627   29270 command_runner.go:130] > 510016ba0c81
	I1114 14:00:07.696643   29270 command_runner.go:130] > 8f35d19c847d
	I1114 14:00:07.696654   29270 command_runner.go:130] > 4037c5756e5b
	I1114 14:00:07.696662   29270 command_runner.go:130] > fc0ae24f94d2
	I1114 14:00:07.696670   29270 command_runner.go:130] > a72669bfefa3
	I1114 14:00:07.696680   29270 command_runner.go:130] > 749b09a9ecc0
	I1114 14:00:07.696690   29270 command_runner.go:130] > 7965b783edc4
	I1114 14:00:07.696748   29270 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1114 14:00:07.712860   29270 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 14:00:07.722010   29270 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1114 14:00:07.722032   29270 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1114 14:00:07.722039   29270 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1114 14:00:07.722053   29270 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 14:00:07.722095   29270 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 14:00:07.722145   29270 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 14:00:07.731064   29270 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 14:00:07.731091   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 14:00:07.860165   29270 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 14:00:07.860189   29270 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1114 14:00:07.860200   29270 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1114 14:00:07.860219   29270 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1114 14:00:07.860233   29270 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1114 14:00:07.860243   29270 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1114 14:00:07.860252   29270 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1114 14:00:07.860262   29270 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1114 14:00:07.860289   29270 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1114 14:00:07.860311   29270 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1114 14:00:07.860326   29270 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1114 14:00:07.860338   29270 command_runner.go:130] > [certs] Using the existing "sa" key
	I1114 14:00:07.860367   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 14:00:07.912395   29270 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 14:00:07.985376   29270 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 14:00:08.193861   29270 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 14:00:08.571791   29270 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 14:00:08.731751   29270 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 14:00:08.734653   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 14:00:08.797654   29270 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 14:00:08.798753   29270 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 14:00:08.799000   29270 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1114 14:00:08.911375   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 14:00:09.001244   29270 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 14:00:09.001267   29270 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 14:00:09.001277   29270 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 14:00:09.001284   29270 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 14:00:09.001313   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1114 14:00:09.073537   29270 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
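	Taken together, the restart path re-runs five kubeadm init phases against the generated config instead of doing a full kubeadm init. Condensed into one loop (binary and config paths are verbatim from this log; the phase list mirrors the commands above):
	
	  # $phase is deliberately unquoted so "certs all" splits into two arguments.
	  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	    sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
	      kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	  done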
	I1114 14:00:09.077445   29270 api_server.go:52] waiting for apiserver process to appear ...
	I1114 14:00:09.077525   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:00:09.092715   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:00:09.605965   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:00:10.105394   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:00:10.605795   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:00:11.105977   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:00:11.161155   29270 command_runner.go:130] > 1637
	I1114 14:00:11.162275   29270 api_server.go:72] duration metric: took 2.084847114s to wait for apiserver process to appear ...
	I1114 14:00:11.162293   29270 api_server.go:88] waiting for apiserver healthz status ...
	I1114 14:00:11.162307   29270 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I1114 14:00:11.162841   29270 api_server.go:269] stopped: https://192.168.39.222:8443/healthz: Get "https://192.168.39.222:8443/healthz": dial tcp 192.168.39.222:8443: connect: connection refused
	I1114 14:00:11.162871   29270 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I1114 14:00:11.163431   29270 api_server.go:269] stopped: https://192.168.39.222:8443/healthz: Get "https://192.168.39.222:8443/healthz": dial tcp 192.168.39.222:8443: connect: connection refused
	I1114 14:00:11.664139   29270 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I1114 14:00:14.106663   29270 api_server.go:279] https://192.168.39.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 14:00:14.106690   29270 api_server.go:103] status: https://192.168.39.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 14:00:14.106703   29270 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I1114 14:00:14.146129   29270 api_server.go:279] https://192.168.39.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 14:00:14.146156   29270 api_server.go:103] status: https://192.168.39.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 14:00:14.164378   29270 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I1114 14:00:14.213051   29270 api_server.go:279] https://192.168.39.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 14:00:14.213095   29270 api_server.go:103] status: https://192.168.39.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 14:00:14.663582   29270 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I1114 14:00:14.669031   29270 api_server.go:279] https://192.168.39.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 14:00:14.669060   29270 api_server.go:103] status: https://192.168.39.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 14:00:15.163620   29270 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I1114 14:00:15.168774   29270 api_server.go:279] https://192.168.39.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 14:00:15.168803   29270 api_server.go:103] status: https://192.168.39.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 14:00:15.663985   29270 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I1114 14:00:15.673671   29270 api_server.go:279] https://192.168.39.222:8443/healthz returned 200:
	ok
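	The 403 -> 500 -> 200 progression above is the apiserver coming up: anonymous requests are refused until the rbac/bootstrap-roles hook installs the default roles (including system:public-info-viewer, which opens /healthz), after which /healthz lists each post-start hook until all pass. The same endpoint can be probed by hand with the client certificate paths this run uses; appending ?verbose returns the per-check breakdown:
	
	  curl --cacert /home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt \
	    --cert /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.crt \
	    --key /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.key \
	    "https://192.168.39.222:8443/healthz?verbose"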
	I1114 14:00:15.673765   29270 round_trippers.go:463] GET https://192.168.39.222:8443/version
	I1114 14:00:15.673777   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:15.673792   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:15.673804   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:15.686407   29270 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1114 14:00:15.686428   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:15.686438   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:15.686445   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:15.686451   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:15.686458   29270 round_trippers.go:580]     Content-Length: 264
	I1114 14:00:15.686465   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:15 GMT
	I1114 14:00:15.686472   29270 round_trippers.go:580]     Audit-Id: c5fb1035-ec8e-487a-8962-d3e5978efc70
	I1114 14:00:15.686478   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:15.686503   29270 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1114 14:00:15.686592   29270 api_server.go:141] control plane version: v1.28.3
	I1114 14:00:15.686613   29270 api_server.go:131] duration metric: took 4.524313863s to wait for apiserver health ...
	I1114 14:00:15.686624   29270 cni.go:84] Creating CNI manager for ""
	I1114 14:00:15.686631   29270 cni.go:136] 3 nodes found, recommending kindnet
	I1114 14:00:15.688403   29270 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1114 14:00:15.689697   29270 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1114 14:00:15.697888   29270 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1114 14:00:15.697912   29270 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1114 14:00:15.697922   29270 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1114 14:00:15.697936   29270 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 14:00:15.697956   29270 command_runner.go:130] > Access: 2023-11-14 13:59:45.502880393 +0000
	I1114 14:00:15.697967   29270 command_runner.go:130] > Modify: 2023-11-11 02:04:07.000000000 +0000
	I1114 14:00:15.697976   29270 command_runner.go:130] > Change: 2023-11-14 13:59:43.657880393 +0000
	I1114 14:00:15.697985   29270 command_runner.go:130] >  Birth: -
	I1114 14:00:15.698391   29270 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1114 14:00:15.698410   29270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1114 14:00:15.727760   29270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1114 14:00:16.710686   29270 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1114 14:00:16.715071   29270 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1114 14:00:16.718990   29270 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1114 14:00:16.735966   29270 command_runner.go:130] > daemonset.apps/kindnet configured
	I1114 14:00:16.739118   29270 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.011324585s)
	I1114 14:00:16.739148   29270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 14:00:16.739261   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I1114 14:00:16.739273   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:16.739284   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:16.739295   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:16.744231   29270 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 14:00:16.744256   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:16.744267   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:16.744283   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:16.744297   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:16 GMT
	I1114 14:00:16.744317   29270 round_trippers.go:580]     Audit-Id: b6030e23-60f1-4609-b8a3-c7cc3f082947
	I1114 14:00:16.744325   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:16.744333   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:16.746016   29270 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"794"},"items":[{"metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84391 chars]
	I1114 14:00:16.749981   29270 system_pods.go:59] 12 kube-system pods found
	I1114 14:00:16.750006   29270 system_pods.go:61] "coredns-5dd5756b68-kvb7v" [b9c9a98f-d025-408a-ada2-0c19a356b4b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:00:16.750013   29270 system_pods.go:61] "etcd-multinode-661456" [a7fc10f1-0274-4c69-9ce0-a962bdfb4e17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 14:00:16.750018   29270 system_pods.go:61] "kindnet-8rqgf" [ed122448-0139-4302-af28-f3d0c97ee881] Running
	I1114 14:00:16.750025   29270 system_pods.go:61] "kindnet-9nvmm" [72d108d5-7995-4aad-9584-f1250560bfa3] Running
	I1114 14:00:16.750030   29270 system_pods.go:61] "kindnet-fjpnd" [1b3a02d4-aa80-421c-8beb-fcc512379320] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1114 14:00:16.750035   29270 system_pods.go:61] "kube-apiserver-multinode-661456" [85c4ecc0-d6c3-46ba-a099-ba93cb0fac2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 14:00:16.750044   29270 system_pods.go:61] "kube-controller-manager-multinode-661456" [503c91d5-280b-44ab-8801-da2418e2bf6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 14:00:16.750049   29270 system_pods.go:61] "kube-proxy-fkj7d" [5d920620-7354-4418-a44e-c7f2965d75a4] Running
	I1114 14:00:16.750053   29270 system_pods.go:61] "kube-proxy-ndrhk" [a11d15a6-5476-429f-ae29-445fa22f70dd] Running
	I1114 14:00:16.750057   29270 system_pods.go:61] "kube-proxy-r9r5l" [27ff4b01-cd10-4c7f-99c2-a0fe362d11ad] Running
	I1114 14:00:16.750061   29270 system_pods.go:61] "kube-scheduler-multinode-661456" [16644b7a-7227-47b7-a06e-94b4dd7b0cce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 14:00:16.750078   29270 system_pods.go:61] "storage-provisioner" [dcfebdb5-371b-432a-ace2-d120fdd17f5e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:00:16.750083   29270 system_pods.go:74] duration metric: took 10.929788ms to wait for pod list to return data ...
	I1114 14:00:16.750090   29270 node_conditions.go:102] verifying NodePressure condition ...
	I1114 14:00:16.750143   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes
	I1114 14:00:16.750151   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:16.750158   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:16.750163   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:16.752759   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:16.752774   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:16.752781   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:16 GMT
	I1114 14:00:16.752795   29270 round_trippers.go:580]     Audit-Id: 0be2299b-145c-4b10-ab90-611c3dccbeec
	I1114 14:00:16.752804   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:16.752813   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:16.752820   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:16.752828   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:16.753108   29270 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"794"},"items":[{"metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13671 chars]
	I1114 14:00:16.753854   29270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:00:16.753875   29270 node_conditions.go:123] node cpu capacity is 2
	I1114 14:00:16.753885   29270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:00:16.753893   29270 node_conditions.go:123] node cpu capacity is 2
	I1114 14:00:16.753899   29270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:00:16.753903   29270 node_conditions.go:123] node cpu capacity is 2
	I1114 14:00:16.753906   29270 node_conditions.go:105] duration metric: took 3.80968ms to run NodePressure ...
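
The NodePressure check above reads each node's capacity (cpu, ephemeral-storage) and verifies that no pressure condition is set. A sketch of the equivalent client-go logic, assuming a pre-built *kubernetes.Clientset; the function name is hypothetical:

// Sketch of the NodePressure verification logged above: print the same
// two capacity figures per node and fail on any pressure condition.
package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func verifyNodePressure(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s reports %s", n.Name, c.Type)
				}
			}
		}
	}
	return nil
}
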
	I1114 14:00:16.753923   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 14:00:16.911515   29270 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1114 14:00:16.982873   29270 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1114 14:00:16.985244   29270 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1114 14:00:16.985377   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1114 14:00:16.985388   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:16.985396   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:16.985402   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:16.990741   29270 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1114 14:00:16.990763   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:16.990773   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:16.990782   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:16.990789   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:16 GMT
	I1114 14:00:16.990796   29270 round_trippers.go:580]     Audit-Id: e05b7a11-cece-462e-b037-aaaa7e89766c
	I1114 14:00:16.990810   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:16.990822   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:16.991226   29270 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"817"},"items":[{"metadata":{"name":"etcd-multinode-661456","namespace":"kube-system","uid":"a7fc10f1-0274-4c69-9ce0-a962bdfb4e17","resourceVersion":"783","creationTimestamp":"2023-11-14T13:53:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.222:2379","kubernetes.io/config.hash":"9e92b05ce6d5e91e18d34c8472e5d273","kubernetes.io/config.mirror":"9e92b05ce6d5e91e18d34c8472e5d273","kubernetes.io/config.seen":"2023-11-14T13:53:24.984306855Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 29788 chars]
	I1114 14:00:16.992207   29270 kubeadm.go:787] kubelet initialised
	I1114 14:00:16.992228   29270 kubeadm.go:788] duration metric: took 6.961071ms waiting for restarted kubelet to initialise ...
	I1114 14:00:16.992238   29270 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
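
The "extra waiting" step announced above polls kube-system pods matching each of the listed selectors until they report Ready, within a 4m budget. A minimal sketch of such a loop (selector strings copied from the log line; the 500ms poll interval and function names are illustrative assumptions, not minikube's implementation):

// Sketch: poll kube-system pods by label selector until all are Ready
// or the timeout expires.
package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitSystemCritical(ctx context.Context, cs *kubernetes.Clientset) error {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			for _, sel := range selectors {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if err != nil {
					return false, err
				}
				for _, p := range pods.Items {
					if !isPodReady(&p) {
						return false, nil // not there yet; keep polling
					}
				}
			}
			return true, nil
		})
}

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
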
	I1114 14:00:16.992299   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I1114 14:00:16.992310   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:16.992320   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:16.992331   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:16.995713   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:00:16.995734   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:16.995742   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:16.995748   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:16.995753   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:16 GMT
	I1114 14:00:16.995758   29270 round_trippers.go:580]     Audit-Id: d8bd096e-bc08-4572-8024-2af67a8d0228
	I1114 14:00:16.995763   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:16.995771   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:16.996854   29270 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"817"},"items":[{"metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 84391 chars]
	I1114 14:00:16.999329   29270 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:16.999403   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:16.999411   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:16.999419   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:16.999424   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:17.002029   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:17.002043   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:17.002049   29270 round_trippers.go:580]     Audit-Id: b4e6e13b-bd82-48d9-bab4-b0bb91077f92
	I1114 14:00:17.002055   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:17.002062   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:17.002067   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:17.002072   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:17.002077   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:16 GMT
	I1114 14:00:17.002377   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:17.002839   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:17.002853   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:17.002860   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:17.002866   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:17.006619   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:00:17.006634   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:17.006640   29270 round_trippers.go:580]     Audit-Id: 0fb84570-2c68-4066-93ff-de2d2db94941
	I1114 14:00:17.006645   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:17.006650   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:17.006655   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:17.006662   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:17.006675   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:16 GMT
	I1114 14:00:17.006800   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:17.007067   29270 pod_ready.go:97] node "multinode-661456" hosting pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-661456" has status "Ready":"False"
	I1114 14:00:17.007082   29270 pod_ready.go:81] duration metric: took 7.733287ms waiting for pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace to be "Ready" ...
	E1114 14:00:17.007090   29270 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-661456" hosting pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-661456" has status "Ready":"False"
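
The "(skipping!)" outcome above is a guard: a pod on a node whose Ready condition is False cannot meaningfully be waited on, so the per-pod wait short-circuits. The node-side check reduces to something like this (a sketch, not minikube's actual code):

// nodeReady reports whether the node's Ready condition is True;
// the waiter above skips pods hosted on nodes where it is not.
package nodegate

import (
	corev1 "k8s.io/api/core/v1"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
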
	I1114 14:00:17.007099   29270 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:17.007139   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-661456
	I1114 14:00:17.007147   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:17.007153   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:17.007158   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:17.009718   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:17.009732   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:17.009739   29270 round_trippers.go:580]     Audit-Id: 73d7173b-d1f9-413d-a36d-4719998eab36
	I1114 14:00:17.009744   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:17.009749   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:17.009753   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:17.009758   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:17.009763   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:16 GMT
	I1114 14:00:17.009917   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-661456","namespace":"kube-system","uid":"a7fc10f1-0274-4c69-9ce0-a962bdfb4e17","resourceVersion":"783","creationTimestamp":"2023-11-14T13:53:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.222:2379","kubernetes.io/config.hash":"9e92b05ce6d5e91e18d34c8472e5d273","kubernetes.io/config.mirror":"9e92b05ce6d5e91e18d34c8472e5d273","kubernetes.io/config.seen":"2023-11-14T13:53:24.984306855Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6305 chars]
	I1114 14:00:17.010299   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:17.010312   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:17.010319   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:17.010325   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:17.019579   29270 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1114 14:00:17.019596   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:17.019602   29270 round_trippers.go:580]     Audit-Id: 32064946-bb24-4589-b827-9ad3ecc176e6
	I1114 14:00:17.019607   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:17.019612   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:17.019617   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:17.019622   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:17.019627   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:16 GMT
	I1114 14:00:17.019734   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:17.019999   29270 pod_ready.go:97] node "multinode-661456" hosting pod "etcd-multinode-661456" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-661456" has status "Ready":"False"
	I1114 14:00:17.020018   29270 pod_ready.go:81] duration metric: took 12.913703ms waiting for pod "etcd-multinode-661456" in "kube-system" namespace to be "Ready" ...
	E1114 14:00:17.020026   29270 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-661456" hosting pod "etcd-multinode-661456" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-661456" has status "Ready":"False"
	I1114 14:00:17.020041   29270 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:17.020086   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-661456
	I1114 14:00:17.020094   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:17.020100   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:17.020108   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:17.029817   29270 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1114 14:00:17.029834   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:17.029841   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:17.029847   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:17.029852   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:17.029857   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:17.029862   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:16 GMT
	I1114 14:00:17.029867   29270 round_trippers.go:580]     Audit-Id: 73c4bcc7-4a64-44fb-b744-aa33c6ef1078
	I1114 14:00:17.030229   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-661456","namespace":"kube-system","uid":"85c4ecc0-d6c3-46ba-a099-ba93cb0fac2e","resourceVersion":"782","creationTimestamp":"2023-11-14T13:53:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.222:8443","kubernetes.io/config.hash":"53c7ea94508e5c77038361438391a9cf","kubernetes.io/config.mirror":"53c7ea94508e5c77038361438391a9cf","kubernetes.io/config.seen":"2023-11-14T13:53:33.091288385Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7859 chars]
	I1114 14:00:17.030614   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:17.030626   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:17.030632   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:17.030638   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:17.033380   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:17.033398   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:17.033406   29270 round_trippers.go:580]     Audit-Id: abe3470c-28e7-41d2-b1b6-76a34135407e
	I1114 14:00:17.033414   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:17.033422   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:17.033441   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:17.033451   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:17.033464   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:16 GMT
	I1114 14:00:17.033682   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:17.033959   29270 pod_ready.go:97] node "multinode-661456" hosting pod "kube-apiserver-multinode-661456" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-661456" has status "Ready":"False"
	I1114 14:00:17.033977   29270 pod_ready.go:81] duration metric: took 13.928247ms waiting for pod "kube-apiserver-multinode-661456" in "kube-system" namespace to be "Ready" ...
	E1114 14:00:17.033985   29270 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-661456" hosting pod "kube-apiserver-multinode-661456" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-661456" has status "Ready":"False"
	I1114 14:00:17.033992   29270 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:17.034044   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-661456
	I1114 14:00:17.034054   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:17.034061   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:17.034069   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:17.038336   29270 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 14:00:17.038355   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:17.038361   29270 round_trippers.go:580]     Audit-Id: b0d6549b-0637-4cf3-afed-b671813de673
	I1114 14:00:17.038367   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:17.038371   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:17.038376   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:17.038382   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:17.038391   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:16 GMT
	I1114 14:00:17.038543   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-661456","namespace":"kube-system","uid":"503c91d5-280b-44ab-8801-da2418e2bf6c","resourceVersion":"787","creationTimestamp":"2023-11-14T13:53:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53acd8cd74cbb0cff5dbf435dc1b4fe3","kubernetes.io/config.mirror":"53acd8cd74cbb0cff5dbf435dc1b4fe3","kubernetes.io/config.seen":"2023-11-14T13:53:33.091289647Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7440 chars]
	I1114 14:00:17.140302   29270 request.go:629] Waited for 101.285762ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456
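
The throttling wait above (and the similar ones that follow) comes from client-go's default client-side rate limiter: QPS 5 with a burst of 10. A harness that fans out many small GETs back-to-back can loosen the limiter when building the rest.Config; the values below are illustrative only:

// Sketch: build a clientset with a looser client-side rate limit so
// bursts of small requests are not queued by the default limiter.
package clientcfg

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is 5 requests/sec
	cfg.Burst = 100 // default burst is 10
	return kubernetes.NewForConfig(cfg)
}
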
	I1114 14:00:17.140380   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:17.140386   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:17.140394   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:17.140404   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:17.142841   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:17.142861   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:17.142868   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:17.142876   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:17 GMT
	I1114 14:00:17.142884   29270 round_trippers.go:580]     Audit-Id: 383b6e5e-89bf-4e26-ac40-a0a566d0edb6
	I1114 14:00:17.142892   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:17.142899   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:17.142907   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:17.143027   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:17.143327   29270 pod_ready.go:97] node "multinode-661456" hosting pod "kube-controller-manager-multinode-661456" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-661456" has status "Ready":"False"
	I1114 14:00:17.143344   29270 pod_ready.go:81] duration metric: took 109.34492ms waiting for pod "kube-controller-manager-multinode-661456" in "kube-system" namespace to be "Ready" ...
	E1114 14:00:17.143353   29270 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-661456" hosting pod "kube-controller-manager-multinode-661456" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-661456" has status "Ready":"False"
	I1114 14:00:17.143359   29270 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fkj7d" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:17.339862   29270 request.go:629] Waited for 196.414422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkj7d
	I1114 14:00:17.339939   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkj7d
	I1114 14:00:17.339947   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:17.339955   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:17.339964   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:17.342665   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:17.342688   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:17.342699   29270 round_trippers.go:580]     Audit-Id: 733bf621-8014-4a1e-b291-6eb035668cc3
	I1114 14:00:17.342707   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:17.342715   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:17.342724   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:17.342734   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:17.342742   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:17 GMT
	I1114 14:00:17.342900   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fkj7d","generateName":"kube-proxy-","namespace":"kube-system","uid":"5d920620-7354-4418-a44e-c7f2965d75a4","resourceVersion":"541","creationTimestamp":"2023-11-14T13:54:42Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8feb3a9f-acf6-44be-b014-f7ba9b8cce85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:54:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8feb3a9f-acf6-44be-b014-f7ba9b8cce85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5544 chars]
	I1114 14:00:17.539759   29270 request.go:629] Waited for 196.402196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:00:17.539844   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:00:17.539850   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:17.539862   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:17.539874   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:17.542233   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:17.542252   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:17.542259   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:17.542264   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:17.542273   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:17.542281   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:17.542289   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:17 GMT
	I1114 14:00:17.542298   29270 round_trippers.go:580]     Audit-Id: ca5a5da1-00ac-4592-ad03-37e76db8d613
	I1114 14:00:17.542432   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"4c94b1f0-8936-49a1-a31c-f75b72563ea3","resourceVersion":"606","creationTimestamp":"2023-11-14T13:54:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:54:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:54:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 3267 chars]
	I1114 14:00:17.542722   29270 pod_ready.go:92] pod "kube-proxy-fkj7d" in "kube-system" namespace has status "Ready":"True"
	I1114 14:00:17.542740   29270 pod_ready.go:81] duration metric: took 399.374621ms waiting for pod "kube-proxy-fkj7d" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:17.542752   29270 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ndrhk" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:17.740126   29270 request.go:629] Waited for 197.304359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ndrhk
	I1114 14:00:17.740185   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ndrhk
	I1114 14:00:17.740192   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:17.740201   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:17.740210   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:17.743238   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:00:17.743259   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:17.743265   29270 round_trippers.go:580]     Audit-Id: bd30dad2-1964-4c83-8e41-c19dac576938
	I1114 14:00:17.743271   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:17.743276   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:17.743284   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:17.743289   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:17.743294   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:17 GMT
	I1114 14:00:17.743475   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ndrhk","generateName":"kube-proxy-","namespace":"kube-system","uid":"a11d15a6-5476-429f-ae29-445fa22f70dd","resourceVersion":"794","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8feb3a9f-acf6-44be-b014-f7ba9b8cce85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8feb3a9f-acf6-44be-b014-f7ba9b8cce85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I1114 14:00:17.940276   29270 request.go:629] Waited for 196.357084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:17.940334   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:17.940341   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:17.940350   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:17.940358   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:17.942994   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:17.943013   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:17.943020   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:17.943025   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:17.943030   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:17 GMT
	I1114 14:00:17.943035   29270 round_trippers.go:580]     Audit-Id: 1e71e9c6-f4ae-466e-b1f3-180718332082
	I1114 14:00:17.943040   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:17.943045   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:17.943428   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:17.943743   29270 pod_ready.go:97] node "multinode-661456" hosting pod "kube-proxy-ndrhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-661456" has status "Ready":"False"
	I1114 14:00:17.943767   29270 pod_ready.go:81] duration metric: took 401.007481ms waiting for pod "kube-proxy-ndrhk" in "kube-system" namespace to be "Ready" ...
	E1114 14:00:17.943778   29270 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-661456" hosting pod "kube-proxy-ndrhk" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-661456" has status "Ready":"False"
	I1114 14:00:17.943787   29270 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r9r5l" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:18.140205   29270 request.go:629] Waited for 196.352323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r9r5l
	I1114 14:00:18.140267   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r9r5l
	I1114 14:00:18.140272   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:18.140279   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:18.140285   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:18.143272   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:18.143298   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:18.143308   29270 round_trippers.go:580]     Audit-Id: d690dd13-db63-4bbc-910f-016dd6d8e10a
	I1114 14:00:18.143317   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:18.143324   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:18.143332   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:18.143340   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:18.143348   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:18 GMT
	I1114 14:00:18.143843   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r9r5l","generateName":"kube-proxy-","namespace":"kube-system","uid":"27ff4b01-cd10-4c7f-99c2-a0fe362d11ad","resourceVersion":"762","creationTimestamp":"2023-11-14T13:55:39Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8feb3a9f-acf6-44be-b014-f7ba9b8cce85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:55:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8feb3a9f-acf6-44be-b014-f7ba9b8cce85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5745 chars]
	I1114 14:00:18.339639   29270 request.go:629] Waited for 195.354323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:00:18.339702   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:00:18.339707   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:18.339715   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:18.339720   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:18.342494   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:18.342518   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:18.342527   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:18.342540   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:18.342547   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:18 GMT
	I1114 14:00:18.342561   29270 round_trippers.go:580]     Audit-Id: 889241bd-8ae1-4e61-a2c3-b19858c9f28d
	I1114 14:00:18.342567   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:18.342578   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:18.342705   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"de2fa03b-c47f-4331-a423-2475d21c15ba","resourceVersion":"774","creationTimestamp":"2023-11-14T13:56:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:56:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3083 chars]
	I1114 14:00:18.342975   29270 pod_ready.go:92] pod "kube-proxy-r9r5l" in "kube-system" namespace has status "Ready":"True"
	I1114 14:00:18.342995   29270 pod_ready.go:81] duration metric: took 399.193459ms waiting for pod "kube-proxy-r9r5l" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:18.343008   29270 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:18.539349   29270 request.go:629] Waited for 196.274096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-661456
	I1114 14:00:18.539399   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-661456
	I1114 14:00:18.539403   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:18.539423   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:18.539432   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:18.542061   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:18.542081   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:18.542090   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:18.542098   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:18.542109   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:18.542116   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:18 GMT
	I1114 14:00:18.542128   29270 round_trippers.go:580]     Audit-Id: 148b3ab9-1937-4431-9963-43044de30034
	I1114 14:00:18.542137   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:18.542347   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-661456","namespace":"kube-system","uid":"16644b7a-7227-47b7-a06e-94b4dd7b0cce","resourceVersion":"786","creationTimestamp":"2023-11-14T13:53:33Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6486319bbce275a5a99514fbfdfe01ab","kubernetes.io/config.mirror":"6486319bbce275a5a99514fbfdfe01ab","kubernetes.io/config.seen":"2023-11-14T13:53:33.091290734Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5152 chars]
	I1114 14:00:18.740113   29270 request.go:629] Waited for 197.391895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:18.740171   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:18.740178   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:18.740186   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:18.740195   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:18.743516   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:00:18.743546   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:18.743559   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:18 GMT
	I1114 14:00:18.743569   29270 round_trippers.go:580]     Audit-Id: c105ab92-0089-4a11-a688-4966a3bb6c11
	I1114 14:00:18.743580   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:18.743588   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:18.743595   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:18.743608   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:18.743728   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:18.744194   29270 pod_ready.go:97] node "multinode-661456" hosting pod "kube-scheduler-multinode-661456" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-661456" has status "Ready":"False"
	I1114 14:00:18.744223   29270 pod_ready.go:81] duration metric: took 401.207805ms waiting for pod "kube-scheduler-multinode-661456" in "kube-system" namespace to be "Ready" ...
	E1114 14:00:18.744235   29270 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-661456" hosting pod "kube-scheduler-multinode-661456" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-661456" has status "Ready":"False"
	I1114 14:00:18.744252   29270 pod_ready.go:38] duration metric: took 1.752005951s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 14:00:18.744267   29270 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 14:00:18.762971   29270 command_runner.go:130] > -16
	I1114 14:00:18.763088   29270 ops.go:34] apiserver oom_adj: -16
	I1114 14:00:18.763102   29270 kubeadm.go:640] restartCluster took 21.135800904s
	I1114 14:00:18.763110   29270 kubeadm.go:406] StartCluster complete in 21.164760164s
	I1114 14:00:18.763123   29270 settings.go:142] acquiring lock: {Name:mk142f790b9a645b9d961649a46a96b1fe4e46d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:00:18.763188   29270 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 14:00:18.763811   29270 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-6041/kubeconfig: {Name:mk8c7c760be5355229ff2da52cb7898ad12a909c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:00:18.764007   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 14:00:18.764169   29270 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 14:00:18.764304   29270 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 14:00:18.764320   29270 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 14:00:18.767281   29270 out.go:177] * Enabled addons: 
	I1114 14:00:18.764584   29270 kapi.go:59] client config for multinode-661456: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.key", CAFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c236c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 14:00:18.768951   29270 addons.go:502] enable addons completed in 4.774456ms: enabled=[]
	I1114 14:00:18.767710   29270 round_trippers.go:463] GET https://192.168.39.222:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1114 14:00:18.768994   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:18.769004   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:18.769016   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:18.771884   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:18.771907   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:18.771917   29270 round_trippers.go:580]     Audit-Id: 9185d5a7-1232-4ea1-9f06-c714054192b9
	I1114 14:00:18.771926   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:18.771934   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:18.771941   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:18.771950   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:18.771959   29270 round_trippers.go:580]     Content-Length: 291
	I1114 14:00:18.771967   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:18 GMT
	I1114 14:00:18.772004   29270 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9d8407fd-076f-444c-a235-0048e6022d7e","resourceVersion":"805","creationTimestamp":"2023-11-14T13:53:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1114 14:00:18.772216   29270 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-661456" context rescaled to 1 replicas
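The GET .../deployments/coredns/scale above returns an autoscaling/v1 Scale object, and kapi.go:248 then reports the coredns deployment rescaled to 1 replica. A minimal client-go sketch of that read-then-set pattern follows, assuming a kubeconfig at the default path; it illustrates the Scale subresource API, not minikube's actual kapi.go code path.

    // Sketch of reading and setting the coredns Scale subresource,
    // mirroring GET .../deployments/coredns/scale above. Assumes a
    // reachable cluster via $HOME/.kube/config; illustrative only.
    package main

    import (
    	"context"
    	"fmt"
    	"path/filepath"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/homedir"
    )

    func main() {
    	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	ctx := context.Background()
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("current replicas:", scale.Spec.Replicas)

    	if scale.Spec.Replicas != 1 {
    		scale.Spec.Replicas = 1 // rescale to a single replica, as the log reports
    		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    			panic(err)
    		}
    	}
    }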
	I1114 14:00:18.772251   29270 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1114 14:00:18.774189   29270 out.go:177] * Verifying Kubernetes components...
	I1114 14:00:18.776247   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:00:18.876514   29270 command_runner.go:130] > apiVersion: v1
	I1114 14:00:18.876538   29270 command_runner.go:130] > data:
	I1114 14:00:18.876542   29270 command_runner.go:130] >   Corefile: |
	I1114 14:00:18.876546   29270 command_runner.go:130] >     .:53 {
	I1114 14:00:18.876550   29270 command_runner.go:130] >         log
	I1114 14:00:18.876555   29270 command_runner.go:130] >         errors
	I1114 14:00:18.876558   29270 command_runner.go:130] >         health {
	I1114 14:00:18.876563   29270 command_runner.go:130] >            lameduck 5s
	I1114 14:00:18.876566   29270 command_runner.go:130] >         }
	I1114 14:00:18.876570   29270 command_runner.go:130] >         ready
	I1114 14:00:18.876575   29270 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1114 14:00:18.876579   29270 command_runner.go:130] >            pods insecure
	I1114 14:00:18.876585   29270 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1114 14:00:18.876589   29270 command_runner.go:130] >            ttl 30
	I1114 14:00:18.876592   29270 command_runner.go:130] >         }
	I1114 14:00:18.876596   29270 command_runner.go:130] >         prometheus :9153
	I1114 14:00:18.876602   29270 command_runner.go:130] >         hosts {
	I1114 14:00:18.876610   29270 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1114 14:00:18.876621   29270 command_runner.go:130] >            fallthrough
	I1114 14:00:18.876631   29270 command_runner.go:130] >         }
	I1114 14:00:18.876639   29270 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1114 14:00:18.876647   29270 command_runner.go:130] >            max_concurrent 1000
	I1114 14:00:18.876654   29270 command_runner.go:130] >         }
	I1114 14:00:18.876661   29270 command_runner.go:130] >         cache 30
	I1114 14:00:18.876669   29270 command_runner.go:130] >         loop
	I1114 14:00:18.876676   29270 command_runner.go:130] >         reload
	I1114 14:00:18.876686   29270 command_runner.go:130] >         loadbalance
	I1114 14:00:18.876692   29270 command_runner.go:130] >     }
	I1114 14:00:18.876699   29270 command_runner.go:130] > kind: ConfigMap
	I1114 14:00:18.876705   29270 command_runner.go:130] > metadata:
	I1114 14:00:18.876713   29270 command_runner.go:130] >   creationTimestamp: "2023-11-14T13:53:32Z"
	I1114 14:00:18.876718   29270 command_runner.go:130] >   name: coredns
	I1114 14:00:18.876726   29270 command_runner.go:130] >   namespace: kube-system
	I1114 14:00:18.876734   29270 command_runner.go:130] >   resourceVersion: "381"
	I1114 14:00:18.876743   29270 command_runner.go:130] >   uid: b278b765-89e6-4909-84ac-466295857425
	I1114 14:00:18.876794   29270 node_ready.go:35] waiting up to 6m0s for node "multinode-661456" to be "Ready" ...
	I1114 14:00:18.877012   29270 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
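start.go:899 skips patching CoreDNS because the Corefile dumped above (fetched via kubectl get configmap coredns) already carries the 192.168.39.1 host.minikube.internal hosts entry. The check reduces to a substring test on the ConfigMap's Corefile key; a minimal client-go sketch, with the function name illustrative and client setup elided (see the Scale sketch above):

    // Sketch of the host-record check behind "CoreDNS already contains
    // host.minikube.internal host record, skipping...". Illustrative,
    // not minikube's start.go.
    package dnscheck

    import (
    	"context"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func hasMinikubeHostRecord(ctx context.Context, cs kubernetes.Interface) (bool, error) {
    	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	// The Corefile text is stored under the "Corefile" data key.
    	return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
    }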
	I1114 14:00:18.940097   29270 request.go:629] Waited for 63.224433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:18.940172   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:18.940181   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:18.940188   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:18.940194   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:18.942774   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:18.942800   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:18.942813   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:18.942822   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:18.942831   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:18 GMT
	I1114 14:00:18.942839   29270 round_trippers.go:580]     Audit-Id: a8df9aa8-b836-4179-b0d5-457e2bdb4fcb
	I1114 14:00:18.942847   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:18.942854   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:18.943032   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:19.139828   29270 request.go:629] Waited for 196.344387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:19.139919   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:19.139928   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:19.139939   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:19.139957   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:19.142402   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:19.142425   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:19.142434   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:19.142441   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:19.142449   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:19.142459   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:19 GMT
	I1114 14:00:19.142480   29270 round_trippers.go:580]     Audit-Id: 98e3364f-eec1-40c7-aecb-8b92753b7782
	I1114 14:00:19.142489   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:19.142634   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:19.643736   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:19.643760   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:19.643768   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:19.643775   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:19.646620   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:19.646645   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:19.646655   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:19.646663   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:19 GMT
	I1114 14:00:19.646670   29270 round_trippers.go:580]     Audit-Id: 239e0933-e9ba-4763-b607-77acce61cc13
	I1114 14:00:19.646677   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:19.646685   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:19.646693   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:19.646973   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:20.143634   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:20.143660   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:20.143670   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:20.143676   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:20.146419   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:20.146446   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:20.146455   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:20.146463   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:20.146470   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:20.146478   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:20 GMT
	I1114 14:00:20.146486   29270 round_trippers.go:580]     Audit-Id: 7e2ab65f-7252-4215-a955-8b0bcaf00c8c
	I1114 14:00:20.146495   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:20.146996   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:20.643771   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:20.643796   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:20.643804   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:20.643810   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:20.646702   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:20.646731   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:20.646742   29270 round_trippers.go:580]     Audit-Id: 37ea2d3b-1ec9-4d31-82bd-1b4361cfd755
	I1114 14:00:20.646759   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:20.646769   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:20.646779   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:20.646788   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:20.646805   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:20 GMT
	I1114 14:00:20.647377   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:21.144096   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:21.144125   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:21.144133   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:21.144140   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:21.147225   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:00:21.147250   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:21.147260   29270 round_trippers.go:580]     Audit-Id: 39410ad1-f267-4cbc-88be-865bf2b10d2f
	I1114 14:00:21.147268   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:21.147276   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:21.147282   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:21.147291   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:21.147297   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:21 GMT
	I1114 14:00:21.147507   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:21.147813   29270 node_ready.go:58] node "multinode-661456" has status "Ready":"False"
	I1114 14:00:21.644091   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:21.644128   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:21.644139   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:21.644148   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:21.648138   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:00:21.648159   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:21.648170   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:21.648180   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:21 GMT
	I1114 14:00:21.648204   29270 round_trippers.go:580]     Audit-Id: db42cfcc-4b8d-44af-92ec-d81041a26323
	I1114 14:00:21.648213   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:21.648218   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:21.648230   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:21.649335   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:22.144022   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:22.144052   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:22.144065   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:22.144075   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:22.147997   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:00:22.148019   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:22.148026   29270 round_trippers.go:580]     Audit-Id: b69a3bd5-e467-4695-8615-d0f9253ebb67
	I1114 14:00:22.148032   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:22.148037   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:22.148042   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:22.148049   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:22.148058   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:22 GMT
	I1114 14:00:22.148316   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:22.643372   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:22.643401   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:22.643414   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:22.643430   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:22.646170   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:22.646193   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:22.646200   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:22.646205   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:22.646219   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:22.646224   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:22 GMT
	I1114 14:00:22.646229   29270 round_trippers.go:580]     Audit-Id: a12c8cfb-b910-4ffb-b57e-936ecca524ee
	I1114 14:00:22.646238   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:22.646832   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:23.143098   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:23.143131   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:23.143143   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:23.143152   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:23.146910   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:00:23.146928   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:23.146935   29270 round_trippers.go:580]     Audit-Id: ced78aa5-399d-4ddd-906d-4d351d9cc84e
	I1114 14:00:23.146941   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:23.146946   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:23.146950   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:23.146956   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:23.146967   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:23 GMT
	I1114 14:00:23.147394   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:23.644134   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:23.644161   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:23.644172   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:23.644178   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:23.646610   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:23.646635   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:23.646645   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:23.646653   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:23.646661   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:23 GMT
	I1114 14:00:23.646669   29270 round_trippers.go:580]     Audit-Id: 25246d6c-3d7d-44df-a0e1-6b23ed0de841
	I1114 14:00:23.646675   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:23.646688   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:23.646970   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:23.647268   29270 node_ready.go:58] node "multinode-661456" has status "Ready":"False"
	I1114 14:00:24.143765   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:24.143788   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:24.143796   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:24.143802   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:24.146811   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:24.146831   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:24.146837   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:24 GMT
	I1114 14:00:24.146843   29270 round_trippers.go:580]     Audit-Id: 7443bc22-0601-4e78-8d56-3f15ee15d2b7
	I1114 14:00:24.146848   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:24.146862   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:24.146875   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:24.146889   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:24.147085   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"784","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5284 chars]
	I1114 14:00:24.643781   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:24.643806   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:24.643814   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:24.643820   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:24.646824   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:24.646850   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:24.646860   29270 round_trippers.go:580]     Audit-Id: bfd3c329-f5f0-4209-88c1-1259f4ad19f8
	I1114 14:00:24.646865   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:24.646870   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:24.646926   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:24.646938   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:24.646943   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:24 GMT
	I1114 14:00:24.647618   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:24.647895   29270 node_ready.go:49] node "multinode-661456" has status "Ready":"True"
	I1114 14:00:24.647912   29270 node_ready.go:38] duration metric: took 5.771097839s waiting for node "multinode-661456" to be "Ready" ...
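The node_ready.go wait that just completed is a simple poll: GET the Node object roughly every half second and re-check its NodeReady condition until the status flips from "False" to "True" (5.77s here, under a 6m ceiling). A minimal client-go sketch of that loop, with the helper name illustrative:

    // Sketch of the node-Ready poll seen above: fetch the Node on an
    // interval and test the NodeReady condition. The 6m timeout mirrors
    // "waiting up to 6m0s"; this is not minikube's node_ready.go.
    package nodewait

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	ctx, cancel := context.WithTimeout(ctx, 6*time.Minute)
    	defer cancel()
    	for {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil // node now reports "Ready":"True"
    				}
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }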
	I1114 14:00:24.647920   29270 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 14:00:24.647975   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I1114 14:00:24.647983   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:24.647990   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:24.647995   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:24.655581   29270 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1114 14:00:24.655606   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:24.655615   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:24.655624   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:24.655632   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:24.655640   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:24 GMT
	I1114 14:00:24.655652   29270 round_trippers.go:580]     Audit-Id: 57fab3ce-e9c9-4b98-bdcb-753cfeba1c8c
	I1114 14:00:24.655665   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:24.657445   29270 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"874"},"items":[{"metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83991 chars]
	I1114 14:00:24.660476   29270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:24.660566   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:24.660577   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:24.660587   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:24.660596   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:24.663809   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:00:24.663829   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:24.663839   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:24.663848   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:24 GMT
	I1114 14:00:24.663864   29270 round_trippers.go:580]     Audit-Id: 0862283b-1465-4734-ae90-5af976468b5f
	I1114 14:00:24.663876   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:24.663885   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:24.663893   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:24.664571   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:24.665103   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:24.665121   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:24.665131   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:24.665142   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:24.667294   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:24.667311   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:24.667320   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:24.667329   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:24.667342   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:24.667353   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:24 GMT
	I1114 14:00:24.667367   29270 round_trippers.go:580]     Audit-Id: 913e6bdb-6e05-4ccb-9c98-4c1c2d5dc85e
	I1114 14:00:24.667386   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:24.667596   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:24.668025   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:24.668047   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:24.668059   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:24.668072   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:24.670646   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:24.670662   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:24.670670   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:24 GMT
	I1114 14:00:24.670678   29270 round_trippers.go:580]     Audit-Id: 45c19c67-3dc5-4e06-b752-70fe65d3daec
	I1114 14:00:24.670687   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:24.670702   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:24.670711   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:24.670723   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:24.670975   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:24.671557   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:24.671577   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:24.671587   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:24.671600   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:24.673392   29270 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 14:00:24.673407   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:24.673414   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:24 GMT
	I1114 14:00:24.673420   29270 round_trippers.go:580]     Audit-Id: f0d01277-353c-4364-9db0-d8acdd8a8d1a
	I1114 14:00:24.673424   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:24.673455   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:24.673468   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:24.673486   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:24.673649   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:25.174637   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:25.174662   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:25.174670   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:25.174675   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:25.177053   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:25.177077   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:25.177096   29270 round_trippers.go:580]     Audit-Id: 09820f79-5349-476e-bcf6-6d546e137177
	I1114 14:00:25.177104   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:25.177112   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:25.177124   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:25.177131   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:25.177141   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:25 GMT
	I1114 14:00:25.177300   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:25.177853   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:25.177876   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:25.177883   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:25.177889   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:25.180002   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:25.180015   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:25.180021   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:25.180026   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:25.180031   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:25.180036   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:25 GMT
	I1114 14:00:25.180041   29270 round_trippers.go:580]     Audit-Id: 2893125f-d892-4f66-b5e7-96fb23a50cb3
	I1114 14:00:25.180046   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:25.180566   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:25.674226   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:25.674253   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:25.674262   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:25.674267   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:25.676854   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:25.676873   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:25.676882   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:25 GMT
	I1114 14:00:25.676887   29270 round_trippers.go:580]     Audit-Id: 34985f60-8d3f-4967-977a-7d6aa5fbbcb3
	I1114 14:00:25.676892   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:25.676897   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:25.676902   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:25.676907   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:25.677116   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:25.677605   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:25.677620   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:25.677627   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:25.677632   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:25.679966   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:25.679982   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:25.679988   29270 round_trippers.go:580]     Audit-Id: 393dfe80-1907-4f81-be0f-bb310a67deda
	I1114 14:00:25.679993   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:25.679998   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:25.680011   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:25.680016   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:25.680024   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:25 GMT
	I1114 14:00:25.680272   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:26.174926   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:26.174954   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:26.174961   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:26.174967   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:26.177607   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:26.177633   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:26.177648   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:26.177656   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:26.177663   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:26 GMT
	I1114 14:00:26.177670   29270 round_trippers.go:580]     Audit-Id: cba08a42-333c-4095-ab74-2e5c75cfaee6
	I1114 14:00:26.177677   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:26.177685   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:26.178137   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:26.178660   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:26.178675   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:26.178686   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:26.178695   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:26.181042   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:26.181062   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:26.181072   29270 round_trippers.go:580]     Audit-Id: 6c163ef5-069d-4a10-b810-a39c5ad925ed
	I1114 14:00:26.181080   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:26.181088   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:26.181105   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:26.181114   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:26.181134   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:26 GMT
	I1114 14:00:26.181300   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:26.674675   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:26.674705   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:26.674718   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:26.674728   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:26.681750   29270 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1114 14:00:26.681777   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:26.681789   29270 round_trippers.go:580]     Audit-Id: 73945dc1-9815-4707-9525-5c4bea07f720
	I1114 14:00:26.681799   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:26.681806   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:26.681811   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:26.681816   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:26.681821   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:26 GMT
	I1114 14:00:26.686977   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:26.687511   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:26.687526   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:26.687536   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:26.687549   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:26.693667   29270 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1114 14:00:26.693686   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:26.693695   29270 round_trippers.go:580]     Audit-Id: 0d4c3545-6a90-4fc9-a962-9fcd3c37714d
	I1114 14:00:26.693704   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:26.693713   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:26.693726   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:26.693739   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:26.693747   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:26 GMT
	I1114 14:00:26.693914   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:26.694296   29270 pod_ready.go:102] pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace has status "Ready":"False"
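The cycle above is the shape of the whole wait: roughly every 500ms the client GETs the coredns Pod and the Node, then pod_ready.go reports the Pod's Ready condition, which stays "False". Note the Pod body's resourceVersion (791) never changes across these polls; the loop is waiting on the kubelet to update the object, not racing a moving target. A minimal sketch of this polling pattern follows, assuming client-go; the package and function names (readiness, waitPodReady) are illustrative, not minikube's actual pod_ready.go implementation:

    package readiness

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the named pod every 500ms until its PodReady
    // condition is True, mirroring the GET-and-check loop in this log.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, err // abort on API errors; a tolerant loop would return false, nil
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					fmt.Printf("pod %q Ready=%s\n", name, c.Status)
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil // PodReady condition not reported yet
    		})
    }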
	I1114 14:00:27.174482   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:27.174513   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:27.174522   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:27.174528   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:27.177132   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:27.177154   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:27.177163   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:27.177171   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:27.177178   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:27.177185   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:27 GMT
	I1114 14:00:27.177195   29270 round_trippers.go:580]     Audit-Id: c5102d9d-c2f2-4086-acf1-261556635a1f
	I1114 14:00:27.177206   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:27.177469   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:27.178028   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:27.178046   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:27.178057   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:27.178067   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:27.180106   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:27.180120   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:27.180126   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:27.180131   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:27.180136   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:27 GMT
	I1114 14:00:27.180140   29270 round_trippers.go:580]     Audit-Id: 9c409dcd-4df4-4ed5-ae62-5304e014e0bf
	I1114 14:00:27.180145   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:27.180150   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:27.180307   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:27.674466   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:27.674493   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:27.674508   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:27.674518   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:27.677387   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:27.677414   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:27.677424   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:27.677449   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:27 GMT
	I1114 14:00:27.677461   29270 round_trippers.go:580]     Audit-Id: 0ce8a86c-1648-4483-8ac7-0ff1706e3a2e
	I1114 14:00:27.677469   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:27.677478   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:27.677483   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:27.678143   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:27.678563   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:27.678575   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:27.678582   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:27.678588   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:27.681186   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:27.681204   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:27.681210   29270 round_trippers.go:580]     Audit-Id: 1a280dae-fdc7-44be-9f22-d8bf7647f2d0
	I1114 14:00:27.681215   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:27.681220   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:27.681228   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:27.681240   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:27.681258   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:27 GMT
	I1114 14:00:27.681385   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:28.174454   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:28.174481   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:28.174488   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:28.174494   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:28.177281   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:28.177301   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:28.177308   29270 round_trippers.go:580]     Audit-Id: d295932a-3d51-4539-a58a-d16c5e9cc575
	I1114 14:00:28.177313   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:28.177318   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:28.177323   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:28.177331   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:28.177335   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:28 GMT
	I1114 14:00:28.177513   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:28.177976   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:28.177992   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:28.177999   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:28.178005   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:28.180316   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:28.180335   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:28.180344   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:28.180351   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:28 GMT
	I1114 14:00:28.180358   29270 round_trippers.go:580]     Audit-Id: 839780f2-100f-4653-a340-22f9242c8e73
	I1114 14:00:28.180365   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:28.180373   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:28.180382   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:28.180638   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:28.674301   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:28.674328   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:28.674336   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:28.674341   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:28.677108   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:28.677134   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:28.677146   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:28.677153   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:28 GMT
	I1114 14:00:28.677162   29270 round_trippers.go:580]     Audit-Id: 250263c3-664f-4742-a9a8-4b8a05196ae4
	I1114 14:00:28.677167   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:28.677172   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:28.677177   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:28.677385   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:28.677997   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:28.678018   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:28.678029   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:28.678042   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:28.680511   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:28.680533   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:28.680542   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:28.680550   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:28.680558   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:28 GMT
	I1114 14:00:28.680566   29270 round_trippers.go:580]     Audit-Id: e25b1546-9d47-4a98-a436-ecfefe26f2e6
	I1114 14:00:28.680573   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:28.680581   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:28.680954   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:29.174619   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:29.174644   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:29.174652   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:29.174661   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:29.177500   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:29.177525   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:29.177536   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:29.177544   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:29 GMT
	I1114 14:00:29.177566   29270 round_trippers.go:580]     Audit-Id: 9018e8a2-b2da-4d90-958a-7d8efa858316
	I1114 14:00:29.177575   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:29.177588   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:29.177597   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:29.178092   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:29.178510   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:29.178524   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:29.178531   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:29.178537   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:29.180913   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:29.180930   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:29.180937   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:29 GMT
	I1114 14:00:29.180945   29270 round_trippers.go:580]     Audit-Id: 5bc9c7b4-e77c-429c-a583-4728ed0e6b6e
	I1114 14:00:29.180954   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:29.180963   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:29.180971   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:29.180983   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:29.181111   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:29.181452   29270 pod_ready.go:102] pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace has status "Ready":"False"
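Every response in this log also carries X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid: API Priority and Fairness stamps each reply with the UIDs of the FlowSchema and PriorityLevelConfiguration that classified the request (constant here, since all requests come from the same client). The request/response lines themselves come from client-go's round_trippers.go debug wrapper, active at this verbosity (-v=8). A hedged sketch of that kind of wrapper, not client-go's actual code; the type name loggingRoundTripper is illustrative:

    package debugrt

    import (
    	"log"
    	"net/http"
    )

    // loggingRoundTripper logs the request line and the API Priority and
    // Fairness headers on each response, similar to what round_trippers.go
    // prints in the log above.
    type loggingRoundTripper struct{ next http.RoundTripper }

    func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
    	log.Printf("%s %s", req.Method, req.URL)
    	resp, err := l.next.RoundTrip(req)
    	if err != nil {
    		return nil, err
    	}
    	// UIDs of the FlowSchema and PriorityLevelConfiguration that
    	// classified this request under API Priority and Fairness.
    	log.Printf("flowschema=%s prioritylevel=%s",
    		resp.Header.Get("X-Kubernetes-Pf-Flowschema-Uid"),
    		resp.Header.Get("X-Kubernetes-Pf-Prioritylevel-Uid"))
    	return resp, nil
    }

client-go exposes rest.Config.WrapTransport for installing a wrapper like this around its HTTP transport.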
	I1114 14:00:29.674731   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:29.674757   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:29.674766   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:29.674773   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:29.677722   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:29.677751   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:29.677760   29270 round_trippers.go:580]     Audit-Id: b94b0c33-86fa-443a-a5ae-36da6b3071db
	I1114 14:00:29.677767   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:29.677772   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:29.677777   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:29.677783   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:29.677792   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:29 GMT
	I1114 14:00:29.678466   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:29.678967   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:29.678985   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:29.678993   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:29.678998   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:29.681492   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:29.681515   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:29.681523   29270 round_trippers.go:580]     Audit-Id: d3db6195-363d-436e-8937-731c152c31c8
	I1114 14:00:29.681532   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:29.681541   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:29.681548   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:29.681561   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:29.681568   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:29 GMT
	I1114 14:00:29.681929   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:30.174550   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:30.174575   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:30.174587   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:30.174594   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:30.178078   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:00:30.178103   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:30.178113   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:30.178129   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:30 GMT
	I1114 14:00:30.178136   29270 round_trippers.go:580]     Audit-Id: 3eef2211-454f-4a30-b5d0-a3c243d0c658
	I1114 14:00:30.178143   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:30.178151   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:30.178159   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:30.178306   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:30.178758   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:30.178775   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:30.178786   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:30.178800   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:30.186628   29270 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1114 14:00:30.186678   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:30.186692   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:30.186702   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:30.186713   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:30.186723   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:30 GMT
	I1114 14:00:30.186732   29270 round_trippers.go:580]     Audit-Id: c7ccce66-ac90-4636-a56e-7d05c8936d80
	I1114 14:00:30.186764   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:30.187017   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:30.674318   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:30.674350   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:30.674362   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:30.674372   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:30.677783   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:00:30.677822   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:30.677834   29270 round_trippers.go:580]     Audit-Id: 47c08974-946a-4080-92a7-3c7d88665455
	I1114 14:00:30.677844   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:30.677853   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:30.677863   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:30.677872   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:30.677885   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:30 GMT
	I1114 14:00:30.678100   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:30.678668   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:30.678686   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:30.678697   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:30.678707   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:30.681259   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:30.681280   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:30.681286   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:30 GMT
	I1114 14:00:30.681291   29270 round_trippers.go:580]     Audit-Id: ca1b63d2-ade3-41d6-a451-be6ffab02a1d
	I1114 14:00:30.681297   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:30.681302   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:30.681307   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:30.681312   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:30.681850   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:31.174473   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:31.174501   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:31.174513   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:31.174524   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:31.177240   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:31.177264   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:31.177274   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:31.177283   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:31 GMT
	I1114 14:00:31.177294   29270 round_trippers.go:580]     Audit-Id: a325bca6-f294-4177-a065-1bec63222b25
	I1114 14:00:31.177306   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:31.177317   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:31.177327   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:31.177530   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:31.178105   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:31.178121   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:31.178132   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:31.178142   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:31.180394   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:31.180415   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:31.180424   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:31 GMT
	I1114 14:00:31.180432   29270 round_trippers.go:580]     Audit-Id: 4333579b-3dfe-431a-8c28-f904e6e93ace
	I1114 14:00:31.180445   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:31.180468   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:31.180477   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:31.180486   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:31.180832   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:31.674483   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:31.674515   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:31.674524   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:31.674530   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:31.678880   29270 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 14:00:31.678909   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:31.678921   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:31.678929   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:31.678936   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:31.678944   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:31 GMT
	I1114 14:00:31.678952   29270 round_trippers.go:580]     Audit-Id: 0bc23209-e6dc-46e3-8c6f-526493cf2ecf
	I1114 14:00:31.678959   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:31.679569   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"791","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6545 chars]
	I1114 14:00:31.680099   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:31.680115   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:31.680126   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:31.680140   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:31.683677   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:00:31.683699   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:31.683710   29270 round_trippers.go:580]     Audit-Id: 18d5fbac-91b4-4b7a-807b-160325532179
	I1114 14:00:31.683718   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:31.683725   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:31.683734   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:31.683741   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:31.683749   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:31 GMT
	I1114 14:00:31.683878   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:31.684267   29270 pod_ready.go:102] pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace has status "Ready":"False"
	I1114 14:00:32.174449   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:00:32.174471   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:32.174479   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:32.174485   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:32.177487   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:32.177519   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:32.177537   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:32 GMT
	I1114 14:00:32.177547   29270 round_trippers.go:580]     Audit-Id: 85c78eba-c9c6-4db4-92bb-c22cfe702ea4
	I1114 14:00:32.177556   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:32.177566   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:32.177575   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:32.177585   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:32.177803   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"902","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I1114 14:00:32.178239   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:32.178252   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:32.178259   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:32.178268   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:32.180544   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:32.180564   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:32.180574   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:32 GMT
	I1114 14:00:32.180591   29270 round_trippers.go:580]     Audit-Id: 94471581-5faa-4f9c-b8bc-4d713cbad130
	I1114 14:00:32.180598   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:32.180609   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:32.180620   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:32.180632   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:32.180850   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:32.181208   29270 pod_ready.go:92] pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace has status "Ready":"True"
	I1114 14:00:32.181225   29270 pod_ready.go:81] duration metric: took 7.520729176s waiting for pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:32.181234   29270 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:32.181286   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-661456
	I1114 14:00:32.181297   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:32.181307   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:32.181319   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:32.183468   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:32.183487   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:32.183497   29270 round_trippers.go:580]     Audit-Id: 7b73e2d3-0989-426d-8160-44020b0e9623
	I1114 14:00:32.183506   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:32.183514   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:32.183522   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:32.183527   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:32.183532   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:32 GMT
	I1114 14:00:32.183639   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-661456","namespace":"kube-system","uid":"a7fc10f1-0274-4c69-9ce0-a962bdfb4e17","resourceVersion":"890","creationTimestamp":"2023-11-14T13:53:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.222:2379","kubernetes.io/config.hash":"9e92b05ce6d5e91e18d34c8472e5d273","kubernetes.io/config.mirror":"9e92b05ce6d5e91e18d34c8472e5d273","kubernetes.io/config.seen":"2023-11-14T13:53:24.984306855Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I1114 14:00:32.184088   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:32.184103   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:32.184114   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:32.184125   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:32.186294   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:32.186315   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:32.186326   29270 round_trippers.go:580]     Audit-Id: d8404d1a-c4ff-4662-ad82-0ac71c492d75
	I1114 14:00:32.186336   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:32.186342   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:32.186353   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:32.186359   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:32.186368   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:32 GMT
	I1114 14:00:32.186525   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:32.186894   29270 pod_ready.go:92] pod "etcd-multinode-661456" in "kube-system" namespace has status "Ready":"True"
	I1114 14:00:32.186911   29270 pod_ready.go:81] duration metric: took 5.6713ms waiting for pod "etcd-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:32.186929   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:32.186979   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-661456
	I1114 14:00:32.186986   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:32.186993   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:32.187001   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:32.188964   29270 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 14:00:32.188977   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:32.188983   29270 round_trippers.go:580]     Audit-Id: c3a3c55c-88e1-4862-8825-e401ed9cd7ca
	I1114 14:00:32.188990   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:32.189001   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:32.189011   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:32.189025   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:32.189034   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:32 GMT
	I1114 14:00:32.189318   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-661456","namespace":"kube-system","uid":"85c4ecc0-d6c3-46ba-a099-ba93cb0fac2e","resourceVersion":"877","creationTimestamp":"2023-11-14T13:53:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.222:8443","kubernetes.io/config.hash":"53c7ea94508e5c77038361438391a9cf","kubernetes.io/config.mirror":"53c7ea94508e5c77038361438391a9cf","kubernetes.io/config.seen":"2023-11-14T13:53:33.091288385Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7615 chars]
	I1114 14:00:32.189824   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:32.189842   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:32.189852   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:32.189861   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:32.191881   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:32.191894   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:32.191900   29270 round_trippers.go:580]     Audit-Id: a78aa4d4-531a-4062-9d70-92780e7a4caa
	I1114 14:00:32.191906   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:32.191911   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:32.191916   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:32.191921   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:32.191928   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:32 GMT
	I1114 14:00:32.192096   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:32.192471   29270 pod_ready.go:92] pod "kube-apiserver-multinode-661456" in "kube-system" namespace has status "Ready":"True"
	I1114 14:00:32.192492   29270 pod_ready.go:81] duration metric: took 5.553696ms waiting for pod "kube-apiserver-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:32.192505   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:32.192579   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-661456
	I1114 14:00:32.192589   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:32.192596   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:32.192601   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:32.194738   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:32.194752   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:32.194758   29270 round_trippers.go:580]     Audit-Id: 71aeb12f-c2d8-4159-bc36-9dc125e60a8e
	I1114 14:00:32.194763   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:32.194770   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:32.194775   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:32.194782   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:32.194797   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:32 GMT
	I1114 14:00:32.194947   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-661456","namespace":"kube-system","uid":"503c91d5-280b-44ab-8801-da2418e2bf6c","resourceVersion":"875","creationTimestamp":"2023-11-14T13:53:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53acd8cd74cbb0cff5dbf435dc1b4fe3","kubernetes.io/config.mirror":"53acd8cd74cbb0cff5dbf435dc1b4fe3","kubernetes.io/config.seen":"2023-11-14T13:53:33.091289647Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7178 chars]
	I1114 14:00:32.195440   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:32.195458   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:32.195465   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:32.195471   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:32.197705   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:32.197720   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:32.197726   29270 round_trippers.go:580]     Audit-Id: f8aa3aa3-9dcd-4305-b190-625365567bef
	I1114 14:00:32.197731   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:32.197736   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:32.197748   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:32.197762   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:32.197772   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:32 GMT
	I1114 14:00:32.197951   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:32.198313   29270 pod_ready.go:92] pod "kube-controller-manager-multinode-661456" in "kube-system" namespace has status "Ready":"True"
	I1114 14:00:32.198335   29270 pod_ready.go:81] duration metric: took 5.813477ms waiting for pod "kube-controller-manager-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:32.198370   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fkj7d" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:32.198435   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkj7d
	I1114 14:00:32.198446   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:32.198462   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:32.198475   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:32.200506   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:32.200517   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:32.200523   29270 round_trippers.go:580]     Audit-Id: cd91be7e-e5bd-4400-96b8-f13829e94cdf
	I1114 14:00:32.200528   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:32.200533   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:32.200544   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:32.200554   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:32.200567   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:32 GMT
	I1114 14:00:32.200700   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fkj7d","generateName":"kube-proxy-","namespace":"kube-system","uid":"5d920620-7354-4418-a44e-c7f2965d75a4","resourceVersion":"541","creationTimestamp":"2023-11-14T13:54:42Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8feb3a9f-acf6-44be-b014-f7ba9b8cce85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:54:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8feb3a9f-acf6-44be-b014-f7ba9b8cce85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5544 chars]
	I1114 14:00:32.201121   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:00:32.201134   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:32.201141   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:32.201147   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:32.202819   29270 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 14:00:32.202830   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:32.202835   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:32.202848   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:32 GMT
	I1114 14:00:32.202856   29270 round_trippers.go:580]     Audit-Id: 340a11dd-fdbe-4c4e-8852-2fafc6a40ee8
	I1114 14:00:32.202865   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:32.202873   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:32.202881   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:32.202988   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"4c94b1f0-8936-49a1-a31c-f75b72563ea3","resourceVersion":"606","creationTimestamp":"2023-11-14T13:54:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:54:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:54:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 3267 chars]
	I1114 14:00:32.203279   29270 pod_ready.go:92] pod "kube-proxy-fkj7d" in "kube-system" namespace has status "Ready":"True"
	I1114 14:00:32.203301   29270 pod_ready.go:81] duration metric: took 4.916248ms waiting for pod "kube-proxy-fkj7d" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:32.203313   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ndrhk" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:32.374646   29270 request.go:629] Waited for 171.279548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ndrhk
	I1114 14:00:32.374697   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ndrhk
	I1114 14:00:32.374711   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:32.374721   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:32.374733   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:32.378007   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:00:32.378031   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:32.378041   29270 round_trippers.go:580]     Audit-Id: 30ef2246-7d17-4678-bae8-a863b4fdb7fb
	I1114 14:00:32.378050   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:32.378059   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:32.378068   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:32.378076   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:32.378089   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:32 GMT
	I1114 14:00:32.378212   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ndrhk","generateName":"kube-proxy-","namespace":"kube-system","uid":"a11d15a6-5476-429f-ae29-445fa22f70dd","resourceVersion":"794","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8feb3a9f-acf6-44be-b014-f7ba9b8cce85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8feb3a9f-acf6-44be-b014-f7ba9b8cce85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
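
The "Waited for … due to client-side throttling" entries above come from client-go's client-side rate limiter (by default roughly QPS=5, Burst=10), not from the server's API Priority and Fairness. A minimal sketch of where those knobs live on a rest.Config, assuming the default ~/.kube/config location (illustrative only, not minikube's actual wiring):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig the way kubectl would (location is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// These two fields drive the client-side rate limiter that emits the
	// "Waited for ... due to client-side throttling" messages; raising them
	// spaces requests out less aggressively.
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods\n", len(pods.Items))
}
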
	I1114 14:00:32.575064   29270 request.go:629] Waited for 196.433949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:32.575146   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:32.575152   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:32.575163   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:32.575175   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:32.578098   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:32.578119   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:32.578128   29270 round_trippers.go:580]     Audit-Id: 58b1358a-bad2-4bb8-81a2-7b4862d4218d
	I1114 14:00:32.578137   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:32.578144   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:32.578153   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:32.578160   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:32.578168   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:32 GMT
	I1114 14:00:32.578271   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:32.578565   29270 pod_ready.go:92] pod "kube-proxy-ndrhk" in "kube-system" namespace has status "Ready":"True"
	I1114 14:00:32.578581   29270 pod_ready.go:81] duration metric: took 375.259802ms waiting for pod "kube-proxy-ndrhk" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:32.578593   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r9r5l" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:32.774912   29270 request.go:629] Waited for 196.243985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r9r5l
	I1114 14:00:32.774980   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r9r5l
	I1114 14:00:32.774987   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:32.774995   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:32.775004   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:32.778261   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:00:32.778281   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:32.778288   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:32 GMT
	I1114 14:00:32.778294   29270 round_trippers.go:580]     Audit-Id: 06c24752-6f96-4dc5-9623-e73a35000ac3
	I1114 14:00:32.778301   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:32.778312   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:32.778324   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:32.778336   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:32.778478   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r9r5l","generateName":"kube-proxy-","namespace":"kube-system","uid":"27ff4b01-cd10-4c7f-99c2-a0fe362d11ad","resourceVersion":"762","creationTimestamp":"2023-11-14T13:55:39Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8feb3a9f-acf6-44be-b014-f7ba9b8cce85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:55:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8feb3a9f-acf6-44be-b014-f7ba9b8cce85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5745 chars]
	I1114 14:00:32.975189   29270 request.go:629] Waited for 196.300679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:00:32.975245   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:00:32.975253   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:32.975261   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:32.975270   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:32.981972   29270 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1114 14:00:32.981996   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:32.982005   29270 round_trippers.go:580]     Audit-Id: 65c2a5b4-08c9-4e16-a771-276726fd917e
	I1114 14:00:32.982013   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:32.982018   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:32.982023   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:32.982029   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:32.982034   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:32 GMT
	I1114 14:00:32.983045   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"de2fa03b-c47f-4331-a423-2475d21c15ba","resourceVersion":"774","creationTimestamp":"2023-11-14T13:56:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:56:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3083 chars]
	I1114 14:00:32.983278   29270 pod_ready.go:92] pod "kube-proxy-r9r5l" in "kube-system" namespace has status "Ready":"True"
	I1114 14:00:32.983292   29270 pod_ready.go:81] duration metric: took 404.692071ms waiting for pod "kube-proxy-r9r5l" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:32.983300   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:33.174639   29270 request.go:629] Waited for 191.277473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-661456
	I1114 14:00:33.174703   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-661456
	I1114 14:00:33.174709   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:33.174717   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:33.174723   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:33.177768   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:00:33.177790   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:33.177800   29270 round_trippers.go:580]     Audit-Id: 509c45a0-c26e-4e3d-ad75-eee565a8c803
	I1114 14:00:33.177809   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:33.177817   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:33.177826   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:33.177834   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:33.177846   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:33 GMT
	I1114 14:00:33.178137   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-661456","namespace":"kube-system","uid":"16644b7a-7227-47b7-a06e-94b4dd7b0cce","resourceVersion":"879","creationTimestamp":"2023-11-14T13:53:33Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6486319bbce275a5a99514fbfdfe01ab","kubernetes.io/config.mirror":"6486319bbce275a5a99514fbfdfe01ab","kubernetes.io/config.seen":"2023-11-14T13:53:33.091290734Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4908 chars]
	I1114 14:00:33.374966   29270 request.go:629] Waited for 196.363999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:33.375023   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:00:33.375028   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:33.375035   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:33.375040   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:33.377951   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:33.378066   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:33.378089   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:33 GMT
	I1114 14:00:33.378102   29270 round_trippers.go:580]     Audit-Id: 3f09f1ff-4a47-4cde-a84a-b6a743701eea
	I1114 14:00:33.378109   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:33.378114   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:33.378126   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:33.378134   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:33.378281   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:00:33.378660   29270 pod_ready.go:92] pod "kube-scheduler-multinode-661456" in "kube-system" namespace has status "Ready":"True"
	I1114 14:00:33.378684   29270 pod_ready.go:81] duration metric: took 395.378438ms waiting for pod "kube-scheduler-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:00:33.378694   29270 pod_ready.go:38] duration metric: took 8.730764051s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
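
The pod_ready.go loop above alternates GETs of each pod and its node until the pod reports the Ready condition with status "True". A minimal client-go sketch of the same Ready-condition poll; the kubeconfig location, pod name, and 500ms cadence are illustrative assumptions, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is "True".
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed location
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same overall shape as the log: re-GET the pod until Ready or timeout.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-kvb7v", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for the pod to become Ready")
		case <-time.After(500 * time.Millisecond): // illustrative cadence
		}
	}
}
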
	I1114 14:00:33.378711   29270 api_server.go:52] waiting for apiserver process to appear ...
	I1114 14:00:33.378756   29270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:00:33.394256   29270 command_runner.go:130] > 1637
	I1114 14:00:33.394303   29270 api_server.go:72] duration metric: took 14.622020493s to wait for apiserver process to appear ...
	I1114 14:00:33.394315   29270 api_server.go:88] waiting for apiserver healthz status ...
	I1114 14:00:33.394334   29270 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I1114 14:00:33.402867   29270 api_server.go:279] https://192.168.39.222:8443/healthz returned 200:
	ok
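
The healthz probe above is a bare GET against the apiserver's /healthz path, which answers HTTP 200 with the literal body "ok" when healthy. A minimal sketch using client-go's REST client (an illustrative equivalent, not minikube's own helper):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed location
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz; a healthy apiserver answers 200 with the body "ok".
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.Background()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
}
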
	I1114 14:00:33.402954   29270 round_trippers.go:463] GET https://192.168.39.222:8443/version
	I1114 14:00:33.402963   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:33.402974   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:33.402988   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:33.404713   29270 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 14:00:33.404732   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:33.404743   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:33.404754   29270 round_trippers.go:580]     Content-Length: 264
	I1114 14:00:33.404762   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:33 GMT
	I1114 14:00:33.404778   29270 round_trippers.go:580]     Audit-Id: a6faafb8-b4d2-4a9d-9206-739e710e8c87
	I1114 14:00:33.404790   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:33.404798   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:33.404809   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:33.404830   29270 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1114 14:00:33.404879   29270 api_server.go:141] control plane version: v1.28.3
	I1114 14:00:33.404895   29270 api_server.go:131] duration metric: took 10.57335ms to wait for apiserver health ...
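
The pretty-printed /version body above is the standard version.Info payload; client-go's discovery client fetches and decodes it in one call. A minimal sketch (kubeconfig location assumed):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed location
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// ServerVersion issues the same GET /version and unmarshals the JSON
	// shown above into a version.Info (Major, Minor, GitVersion, and so on).
	info, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", info.GitVersion)
}
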
	I1114 14:00:33.404907   29270 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 14:00:33.575337   29270 request.go:629] Waited for 170.352095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I1114 14:00:33.575402   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I1114 14:00:33.575411   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:33.575423   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:33.575435   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:33.579833   29270 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 14:00:33.579860   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:33.579870   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:33.579878   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:33 GMT
	I1114 14:00:33.579887   29270 round_trippers.go:580]     Audit-Id: ce104684-e50c-4940-ade0-9a4f628aa31d
	I1114 14:00:33.579909   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:33.579921   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:33.579929   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:33.581727   29270 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"911"},"items":[{"metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"902","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{ [truncated 82965 chars]
	I1114 14:00:33.584199   29270 system_pods.go:59] 12 kube-system pods found
	I1114 14:00:33.584220   29270 system_pods.go:61] "coredns-5dd5756b68-kvb7v" [b9c9a98f-d025-408a-ada2-0c19a356b4b9] Running
	I1114 14:00:33.584225   29270 system_pods.go:61] "etcd-multinode-661456" [a7fc10f1-0274-4c69-9ce0-a962bdfb4e17] Running
	I1114 14:00:33.584230   29270 system_pods.go:61] "kindnet-8rqgf" [ed122448-0139-4302-af28-f3d0c97ee881] Running
	I1114 14:00:33.584234   29270 system_pods.go:61] "kindnet-9nvmm" [72d108d5-7995-4aad-9584-f1250560bfa3] Running
	I1114 14:00:33.584238   29270 system_pods.go:61] "kindnet-fjpnd" [1b3a02d4-aa80-421c-8beb-fcc512379320] Running
	I1114 14:00:33.584243   29270 system_pods.go:61] "kube-apiserver-multinode-661456" [85c4ecc0-d6c3-46ba-a099-ba93cb0fac2e] Running
	I1114 14:00:33.584251   29270 system_pods.go:61] "kube-controller-manager-multinode-661456" [503c91d5-280b-44ab-8801-da2418e2bf6c] Running
	I1114 14:00:33.584256   29270 system_pods.go:61] "kube-proxy-fkj7d" [5d920620-7354-4418-a44e-c7f2965d75a4] Running
	I1114 14:00:33.584259   29270 system_pods.go:61] "kube-proxy-ndrhk" [a11d15a6-5476-429f-ae29-445fa22f70dd] Running
	I1114 14:00:33.584263   29270 system_pods.go:61] "kube-proxy-r9r5l" [27ff4b01-cd10-4c7f-99c2-a0fe362d11ad] Running
	I1114 14:00:33.584269   29270 system_pods.go:61] "kube-scheduler-multinode-661456" [16644b7a-7227-47b7-a06e-94b4dd7b0cce] Running
	I1114 14:00:33.584273   29270 system_pods.go:61] "storage-provisioner" [dcfebdb5-371b-432a-ace2-d120fdd17f5e] Running
	I1114 14:00:33.584285   29270 system_pods.go:74] duration metric: took 179.370313ms to wait for pod list to return data ...
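
The "12 kube-system pods found" summary above is produced from a single List of the kube-system namespace whose items are then checked one by one. A minimal sketch that lists the namespace and prints each pod's phase (illustrative, not the system_pods.go implementation):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed location
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Phase is "Running" for every pod in the healthy summary above.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}
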
	I1114 14:00:33.584291   29270 default_sa.go:34] waiting for default service account to be created ...
	I1114 14:00:33.774673   29270 request.go:629] Waited for 190.310094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/default/serviceaccounts
	I1114 14:00:33.774726   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/default/serviceaccounts
	I1114 14:00:33.774733   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:33.774741   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:33.774748   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:33.777363   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:33.777386   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:33.777396   29270 round_trippers.go:580]     Content-Length: 261
	I1114 14:00:33.777404   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:33 GMT
	I1114 14:00:33.777441   29270 round_trippers.go:580]     Audit-Id: fee1eb93-e016-46f5-a5b5-5de809989312
	I1114 14:00:33.777464   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:33.777476   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:33.777487   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:33.777499   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:33.777540   29270 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"911"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f96f8f2f-0c60-4808-8145-6fa75938be94","resourceVersion":"342","creationTimestamp":"2023-11-14T13:53:45Z"}}]}
	I1114 14:00:33.777757   29270 default_sa.go:45] found service account: "default"
	I1114 14:00:33.777780   29270 default_sa.go:55] duration metric: took 193.479934ms for default service account to be created ...
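
The default-service-account wait lists service accounts in the "default" namespace and succeeds once one named "default" appears, as in the response body above. A compact illustrative sketch:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed location
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sas, err := client.CoreV1().ServiceAccounts("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sa := range sas.Items {
		if sa.Name == "default" {
			fmt.Println(`found service account: "default"`)
			return
		}
	}
	fmt.Println("default service account not created yet")
}
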
	I1114 14:00:33.777789   29270 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 14:00:33.975304   29270 request.go:629] Waited for 197.448722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I1114 14:00:33.975389   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I1114 14:00:33.975398   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:33.975412   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:33.975427   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:33.980139   29270 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 14:00:33.980168   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:33.980183   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:33.980193   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:33.980204   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:33 GMT
	I1114 14:00:33.980212   29270 round_trippers.go:580]     Audit-Id: e659a03e-cc2b-4c5c-aa39-477d3b1ad762
	I1114 14:00:33.980220   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:33.980233   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:33.981925   29270 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"911"},"items":[{"metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"902","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{ [truncated 82965 chars]
	I1114 14:00:33.984337   29270 system_pods.go:86] 12 kube-system pods found
	I1114 14:00:33.984358   29270 system_pods.go:89] "coredns-5dd5756b68-kvb7v" [b9c9a98f-d025-408a-ada2-0c19a356b4b9] Running
	I1114 14:00:33.984364   29270 system_pods.go:89] "etcd-multinode-661456" [a7fc10f1-0274-4c69-9ce0-a962bdfb4e17] Running
	I1114 14:00:33.984368   29270 system_pods.go:89] "kindnet-8rqgf" [ed122448-0139-4302-af28-f3d0c97ee881] Running
	I1114 14:00:33.984371   29270 system_pods.go:89] "kindnet-9nvmm" [72d108d5-7995-4aad-9584-f1250560bfa3] Running
	I1114 14:00:33.984375   29270 system_pods.go:89] "kindnet-fjpnd" [1b3a02d4-aa80-421c-8beb-fcc512379320] Running
	I1114 14:00:33.984379   29270 system_pods.go:89] "kube-apiserver-multinode-661456" [85c4ecc0-d6c3-46ba-a099-ba93cb0fac2e] Running
	I1114 14:00:33.984386   29270 system_pods.go:89] "kube-controller-manager-multinode-661456" [503c91d5-280b-44ab-8801-da2418e2bf6c] Running
	I1114 14:00:33.984393   29270 system_pods.go:89] "kube-proxy-fkj7d" [5d920620-7354-4418-a44e-c7f2965d75a4] Running
	I1114 14:00:33.984402   29270 system_pods.go:89] "kube-proxy-ndrhk" [a11d15a6-5476-429f-ae29-445fa22f70dd] Running
	I1114 14:00:33.984406   29270 system_pods.go:89] "kube-proxy-r9r5l" [27ff4b01-cd10-4c7f-99c2-a0fe362d11ad] Running
	I1114 14:00:33.984410   29270 system_pods.go:89] "kube-scheduler-multinode-661456" [16644b7a-7227-47b7-a06e-94b4dd7b0cce] Running
	I1114 14:00:33.984414   29270 system_pods.go:89] "storage-provisioner" [dcfebdb5-371b-432a-ace2-d120fdd17f5e] Running
	I1114 14:00:33.984421   29270 system_pods.go:126] duration metric: took 206.625861ms to wait for k8s-apps to be running ...
	I1114 14:00:33.984430   29270 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 14:00:33.984479   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:00:34.017139   29270 system_svc.go:56] duration metric: took 32.699747ms WaitForService to wait for kubelet.
	I1114 14:00:34.017167   29270 kubeadm.go:581] duration metric: took 15.244885795s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 14:00:34.017192   29270 node_conditions.go:102] verifying NodePressure condition ...
	I1114 14:00:34.174544   29270 request.go:629] Waited for 157.272211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes
	I1114 14:00:34.174592   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes
	I1114 14:00:34.174597   29270 round_trippers.go:469] Request Headers:
	I1114 14:00:34.174604   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:00:34.174612   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:00:34.177583   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:00:34.177609   29270 round_trippers.go:577] Response Headers:
	I1114 14:00:34.177618   29270 round_trippers.go:580]     Audit-Id: ea1e2e45-5392-40a1-86bd-b4227849e7bd
	I1114 14:00:34.177627   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:00:34.177634   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:00:34.177640   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:00:34.177651   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:00:34.177658   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:00:34 GMT
	I1114 14:00:34.177935   29270 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"911"},"items":[{"metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 13544 chars]
	I1114 14:00:34.178687   29270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:00:34.178712   29270 node_conditions.go:123] node cpu capacity is 2
	I1114 14:00:34.178723   29270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:00:34.178730   29270 node_conditions.go:123] node cpu capacity is 2
	I1114 14:00:34.178744   29270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:00:34.178751   29270 node_conditions.go:123] node cpu capacity is 2
	I1114 14:00:34.178757   29270 node_conditions.go:105] duration metric: took 161.558714ms to run NodePressure ...
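
The NodePressure check above reads each node's .status.capacity out of the GET /api/v1/nodes response; the three storage/cpu pairs correspond to the three nodes in the cluster. A minimal Go sketch of that field access, with trimmed types and a hypothetical one-node sample body (not minikube's actual code):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // nodeList keeps only the fields the capacity check reads; the real
    // Node objects carry far more (see the truncated response above).
    type nodeList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Capacity map[string]string `json:"capacity"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        // Hypothetical sample in the same shape as the /api/v1/nodes body.
        raw := `{"items":[{"metadata":{"name":"multinode-661456"},
          "status":{"capacity":{"cpu":"2","ephemeral-storage":"17784752Ki"}}}]}`
        var nl nodeList
        if err := json.Unmarshal([]byte(raw), &nl); err != nil {
            panic(err)
        }
        for _, n := range nl.Items {
            fmt.Printf("node %s: storage ephemeral capacity is %s, cpu capacity is %s\n",
                n.Metadata.Name,
                n.Status.Capacity["ephemeral-storage"],
                n.Status.Capacity["cpu"])
        }
    }
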
	I1114 14:00:34.178774   29270 start.go:228] waiting for startup goroutines ...
	I1114 14:00:34.178787   29270 start.go:233] waiting for cluster config update ...
	I1114 14:00:34.178800   29270 start.go:242] writing updated cluster config ...
	I1114 14:00:34.179379   29270 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 14:00:34.179516   29270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/config.json ...
	I1114 14:00:34.183111   29270 out.go:177] * Starting worker node multinode-661456-m02 in cluster multinode-661456
	I1114 14:00:34.184563   29270 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1114 14:00:34.184586   29270 cache.go:56] Caching tarball of preloaded images
	I1114 14:00:34.184702   29270 preload.go:174] Found /home/jenkins/minikube-integration/17581-6041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1114 14:00:34.184718   29270 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1114 14:00:34.184813   29270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/config.json ...
	I1114 14:00:34.185009   29270 start.go:365] acquiring machines lock for multinode-661456-m02: {Name:mka8a7be0fef2cfa89eb7b4f7f1c7ded4441f603 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 14:00:34.185062   29270 start.go:369] acquired machines lock for "multinode-661456-m02" in 31.592µs
	I1114 14:00:34.185084   29270 start.go:96] Skipping create...Using existing machine configuration
	I1114 14:00:34.185094   29270 fix.go:54] fixHost starting: m02
	I1114 14:00:34.185379   29270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:00:34.185405   29270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:00:34.199492   29270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
	I1114 14:00:34.199902   29270 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:00:34.200320   29270 main.go:141] libmachine: Using API Version  1
	I1114 14:00:34.200339   29270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:00:34.200682   29270 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:00:34.200852   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .DriverName
	I1114 14:00:34.200993   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetState
	I1114 14:00:34.202584   29270 fix.go:102] recreateIfNeeded on multinode-661456-m02: state=Stopped err=<nil>
	I1114 14:00:34.202609   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .DriverName
	W1114 14:00:34.202761   29270 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 14:00:34.204558   29270 out.go:177] * Restarting existing kvm2 VM for "multinode-661456-m02" ...
	I1114 14:00:34.206553   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .Start
	I1114 14:00:34.206742   29270 main.go:141] libmachine: (multinode-661456-m02) Ensuring networks are active...
	I1114 14:00:34.207471   29270 main.go:141] libmachine: (multinode-661456-m02) Ensuring network default is active
	I1114 14:00:34.207850   29270 main.go:141] libmachine: (multinode-661456-m02) Ensuring network mk-multinode-661456 is active
	I1114 14:00:34.208232   29270 main.go:141] libmachine: (multinode-661456-m02) Getting domain xml...
	I1114 14:00:34.209043   29270 main.go:141] libmachine: (multinode-661456-m02) Creating domain...
	I1114 14:00:35.446639   29270 main.go:141] libmachine: (multinode-661456-m02) Waiting to get IP...
	I1114 14:00:35.447526   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:35.447921   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | unable to find current IP address of domain multinode-661456-m02 in network mk-multinode-661456
	I1114 14:00:35.448022   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | I1114 14:00:35.447884   29535 retry.go:31] will retry after 291.255275ms: waiting for machine to come up
	I1114 14:00:35.740465   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:35.740901   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | unable to find current IP address of domain multinode-661456-m02 in network mk-multinode-661456
	I1114 14:00:35.740934   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | I1114 14:00:35.740845   29535 retry.go:31] will retry after 279.11305ms: waiting for machine to come up
	I1114 14:00:36.021322   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:36.021789   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | unable to find current IP address of domain multinode-661456-m02 in network mk-multinode-661456
	I1114 14:00:36.021816   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | I1114 14:00:36.021734   29535 retry.go:31] will retry after 336.608019ms: waiting for machine to come up
	I1114 14:00:36.360260   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:36.360757   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | unable to find current IP address of domain multinode-661456-m02 in network mk-multinode-661456
	I1114 14:00:36.360788   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | I1114 14:00:36.360689   29535 retry.go:31] will retry after 490.282744ms: waiting for machine to come up
	I1114 14:00:36.852207   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:36.852649   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | unable to find current IP address of domain multinode-661456-m02 in network mk-multinode-661456
	I1114 14:00:36.852667   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | I1114 14:00:36.852619   29535 retry.go:31] will retry after 757.407567ms: waiting for machine to come up
	I1114 14:00:37.611689   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:37.612145   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | unable to find current IP address of domain multinode-661456-m02 in network mk-multinode-661456
	I1114 14:00:37.612174   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | I1114 14:00:37.612083   29535 retry.go:31] will retry after 676.832516ms: waiting for machine to come up
	I1114 14:00:38.291001   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:38.291468   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | unable to find current IP address of domain multinode-661456-m02 in network mk-multinode-661456
	I1114 14:00:38.291500   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | I1114 14:00:38.291413   29535 retry.go:31] will retry after 766.45018ms: waiting for machine to come up
	I1114 14:00:39.059581   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:39.060050   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | unable to find current IP address of domain multinode-661456-m02 in network mk-multinode-661456
	I1114 14:00:39.060081   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | I1114 14:00:39.059998   29535 retry.go:31] will retry after 1.472724153s: waiting for machine to come up
	I1114 14:00:40.534373   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:40.534747   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | unable to find current IP address of domain multinode-661456-m02 in network mk-multinode-661456
	I1114 14:00:40.534773   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | I1114 14:00:40.534706   29535 retry.go:31] will retry after 1.671577436s: waiting for machine to come up
	I1114 14:00:42.208462   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:42.208773   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | unable to find current IP address of domain multinode-661456-m02 in network mk-multinode-661456
	I1114 14:00:42.208800   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | I1114 14:00:42.208741   29535 retry.go:31] will retry after 1.7428371s: waiting for machine to come up
	I1114 14:00:43.953849   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:43.954339   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | unable to find current IP address of domain multinode-661456-m02 in network mk-multinode-661456
	I1114 14:00:43.954368   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | I1114 14:00:43.954289   29535 retry.go:31] will retry after 2.619243146s: waiting for machine to come up
	I1114 14:00:46.576809   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:46.577201   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | unable to find current IP address of domain multinode-661456-m02 in network mk-multinode-661456
	I1114 14:00:46.577235   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | I1114 14:00:46.577144   29535 retry.go:31] will retry after 3.310278759s: waiting for machine to come up
	I1114 14:00:49.889021   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:49.889382   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | unable to find current IP address of domain multinode-661456-m02 in network mk-multinode-661456
	I1114 14:00:49.889414   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | I1114 14:00:49.889348   29535 retry.go:31] will retry after 3.168707531s: waiting for machine to come up
	I1114 14:00:53.061607   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.062043   29270 main.go:141] libmachine: (multinode-661456-m02) Found IP for machine: 192.168.39.228
	I1114 14:00:53.062067   29270 main.go:141] libmachine: (multinode-661456-m02) Reserving static IP address...
	I1114 14:00:53.062078   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has current primary IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.062522   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "multinode-661456-m02", mac: "52:54:00:17:e2:91", ip: "192.168.39.228"} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:53.062570   29270 main.go:141] libmachine: (multinode-661456-m02) Reserved static IP address: 192.168.39.228
	I1114 14:00:53.062594   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | skip adding static IP to network mk-multinode-661456 - found existing host DHCP lease matching {name: "multinode-661456-m02", mac: "52:54:00:17:e2:91", ip: "192.168.39.228"}
	I1114 14:00:53.062613   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | Getting to WaitForSSH function...
	I1114 14:00:53.062630   29270 main.go:141] libmachine: (multinode-661456-m02) Waiting for SSH to be available...
	I1114 14:00:53.064528   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.064892   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:53.064919   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.065074   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | Using SSH client type: external
	I1114 14:00:53.065101   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m02/id_rsa (-rw-------)
	I1114 14:00:53.065149   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 14:00:53.065173   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | About to run SSH command:
	I1114 14:00:53.065194   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | exit 0
	I1114 14:00:53.161095   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | SSH cmd err, output: <nil>: 
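
The "will retry after ..." lines above come from a polling loop in the kvm2 driver that waits for the restarted VM's DHCP lease to show an IP address. A minimal Go sketch of the pattern, assuming a hypothetical lookupIP helper (minikube's real code queries libvirt's leases and adds jitter to the delays):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNoIP = errors.New("machine has no IP yet")

    // lookupIP stands in for querying libvirt's DHCP leases for the
    // domain's MAC address; it is not minikube's real API.
    func lookupIP() (string, error) {
        return "", errNoIP
    }

    // waitForIP polls with a growing delay, in the spirit of the
    // "will retry after ..." lines in the log.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            if delay < 4*time.Second {
                delay *= 2 // back off, roughly as the log shows
            }
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        if ip, err := waitForIP(2 * time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }
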
	I1114 14:00:53.161464   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetConfigRaw
	I1114 14:00:53.162032   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetIP
	I1114 14:00:53.164307   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.164732   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:53.164777   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.165020   29270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/config.json ...
	I1114 14:00:53.165223   29270 machine.go:88] provisioning docker machine ...
	I1114 14:00:53.165241   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .DriverName
	I1114 14:00:53.165407   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetMachineName
	I1114 14:00:53.165599   29270 buildroot.go:166] provisioning hostname "multinode-661456-m02"
	I1114 14:00:53.165619   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetMachineName
	I1114 14:00:53.165742   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHHostname
	I1114 14:00:53.167721   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.168093   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:53.168131   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.168248   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHPort
	I1114 14:00:53.168438   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:53.168594   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:53.168740   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHUsername
	I1114 14:00:53.168873   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 14:00:53.169237   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1114 14:00:53.169251   29270 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-661456-m02 && echo "multinode-661456-m02" | sudo tee /etc/hostname
	I1114 14:00:53.314860   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-661456-m02
	
	I1114 14:00:53.314895   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHHostname
	I1114 14:00:53.317483   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.317863   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:53.317895   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.318022   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHPort
	I1114 14:00:53.318218   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:53.318348   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:53.318455   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHUsername
	I1114 14:00:53.318592   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 14:00:53.318966   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1114 14:00:53.318993   29270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-661456-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-661456-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-661456-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 14:00:53.457280   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 14:00:53.457306   29270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17581-6041/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-6041/.minikube}
	I1114 14:00:53.457324   29270 buildroot.go:174] setting up certificates
	I1114 14:00:53.457332   29270 provision.go:83] configureAuth start
	I1114 14:00:53.457340   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetMachineName
	I1114 14:00:53.457638   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetIP
	I1114 14:00:53.460280   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.460643   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:53.460666   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.460839   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHHostname
	I1114 14:00:53.462912   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.463235   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:53.463266   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.463370   29270 provision.go:138] copyHostCerts
	I1114 14:00:53.463398   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem
	I1114 14:00:53.463426   29270 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem, removing ...
	I1114 14:00:53.463436   29270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem
	I1114 14:00:53.463516   29270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem (1675 bytes)
	I1114 14:00:53.463602   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem
	I1114 14:00:53.463623   29270 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem, removing ...
	I1114 14:00:53.463633   29270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem
	I1114 14:00:53.463671   29270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem (1082 bytes)
	I1114 14:00:53.463726   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem
	I1114 14:00:53.463750   29270 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem, removing ...
	I1114 14:00:53.463759   29270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem
	I1114 14:00:53.463802   29270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem (1123 bytes)
	I1114 14:00:53.463875   29270 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca-key.pem org=jenkins.multinode-661456-m02 san=[192.168.39.228 192.168.39.228 localhost 127.0.0.1 minikube multinode-661456-m02]
	I1114 14:00:53.640067   29270 provision.go:172] copyRemoteCerts
	I1114 14:00:53.640117   29270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 14:00:53.640139   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHHostname
	I1114 14:00:53.642768   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.643120   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:53.643148   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.643300   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHPort
	I1114 14:00:53.643492   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:53.643678   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHUsername
	I1114 14:00:53.643813   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m02/id_rsa Username:docker}
	I1114 14:00:53.738951   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1114 14:00:53.739024   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 14:00:53.763037   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1114 14:00:53.763099   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1114 14:00:53.785482   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1114 14:00:53.785557   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 14:00:53.812333   29270 provision.go:86] duration metric: configureAuth took 354.989338ms
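
configureAuth generated a server certificate whose SANs cover the machine IP, localhost and the hostname, then pushed three files into /etc/docker so dockerd can run with --tlsverify (see the unit written below). A sketch of that host-to-guest mapping; the home path is a hypothetical placeholder and the copy itself would go through the SSH runner shown in the log:

    package main

    import "fmt"

    // asset pairs a file on the build host with its destination in the
    // guest; the copy itself goes over the SSH session from the log.
    type asset struct{ src, dst string }

    func dockerTLSAssets(miniHome string) []asset {
        return []asset{
            {miniHome + "/certs/ca.pem", "/etc/docker/ca.pem"},
            {miniHome + "/machines/server.pem", "/etc/docker/server.pem"},
            {miniHome + "/machines/server-key.pem", "/etc/docker/server-key.pem"},
        }
    }

    func main() {
        for _, a := range dockerTLSAssets("/home/user/.minikube") { // hypothetical home
            fmt.Printf("scp %s --> %s\n", a.src, a.dst)
        }
    }
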
	I1114 14:00:53.812361   29270 buildroot.go:189] setting minikube options for container-runtime
	I1114 14:00:53.812551   29270 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 14:00:53.812571   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .DriverName
	I1114 14:00:53.812870   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHHostname
	I1114 14:00:53.815495   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.815937   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:53.815971   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.816082   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHPort
	I1114 14:00:53.816251   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:53.816423   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:53.816546   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHUsername
	I1114 14:00:53.816721   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 14:00:53.817142   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1114 14:00:53.817162   29270 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1114 14:00:53.951814   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1114 14:00:53.951833   29270 buildroot.go:70] root file system type: tmpfs
	I1114 14:00:53.951938   29270 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1114 14:00:53.951958   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHHostname
	I1114 14:00:53.954418   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.954745   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:53.954786   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:53.954987   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHPort
	I1114 14:00:53.955193   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:53.955345   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:53.955480   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHUsername
	I1114 14:00:53.955622   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 14:00:53.956067   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1114 14:00:53.956135   29270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.222"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1114 14:00:54.104702   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.222
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1114 14:00:54.104730   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHHostname
	I1114 14:00:54.107422   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:54.107697   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:54.107729   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:54.107941   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHPort
	I1114 14:00:54.108149   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:54.108305   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:54.108484   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHUsername
	I1114 14:00:54.108622   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 14:00:54.108957   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1114 14:00:54.108982   29270 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1114 14:00:54.952881   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
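
The command above writes the rendered unit to docker.service.new and installs it only when it differs from what is on disk: diff exits non-zero on any difference, or, as here, when the target does not yet exist, and either case triggers the move, daemon-reload, enable and restart branch. A self-contained Go sketch of that step; runSSH is a hypothetical stand-in that only prints the command:

    package main

    import "fmt"

    // runSSH is a hypothetical stand-in for minikube's SSH runner; it
    // only prints the command so the sketch stays safe to run.
    func runSSH(cmd string) error {
        fmt.Println("ssh:", cmd)
        return nil
    }

    // updateDockerUnit installs the freshly rendered unit only when it
    // differs from (or is missing on) the guest: diff exits non-zero in
    // both cases, so the || branch moves the file into place and
    // reloads, enables and restarts docker.
    func updateDockerUnit() error {
        const svc = "/lib/systemd/system/docker.service"
        cmd := fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && "+
                "sudo systemctl -f enable docker && "+
                "sudo systemctl -f restart docker; }", svc)
        return runSSH(cmd)
    }

    func main() {
        if err := updateDockerUnit(); err != nil {
            fmt.Println("update failed:", err)
        }
    }
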
	I1114 14:00:54.952906   29270 machine.go:91] provisioned docker machine in 1.787671015s
	I1114 14:00:54.952922   29270 start.go:300] post-start starting for "multinode-661456-m02" (driver="kvm2")
	I1114 14:00:54.952948   29270 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 14:00:54.952972   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .DriverName
	I1114 14:00:54.953273   29270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 14:00:54.953298   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHHostname
	I1114 14:00:54.955735   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:54.956109   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:54.956142   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:54.956264   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHPort
	I1114 14:00:54.956444   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:54.956561   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHUsername
	I1114 14:00:54.956710   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m02/id_rsa Username:docker}
	I1114 14:00:55.057699   29270 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 14:00:55.062006   29270 command_runner.go:130] > NAME=Buildroot
	I1114 14:00:55.062023   29270 command_runner.go:130] > VERSION=2021.02.12-1-gccdd192-dirty
	I1114 14:00:55.062027   29270 command_runner.go:130] > ID=buildroot
	I1114 14:00:55.062033   29270 command_runner.go:130] > VERSION_ID=2021.02.12
	I1114 14:00:55.062037   29270 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1114 14:00:55.062301   29270 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 14:00:55.062320   29270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-6041/.minikube/addons for local assets ...
	I1114 14:00:55.062397   29270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-6041/.minikube/files for local assets ...
	I1114 14:00:55.062488   29270 filesync.go:149] local asset: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem -> 132382.pem in /etc/ssl/certs
	I1114 14:00:55.062500   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem -> /etc/ssl/certs/132382.pem
	I1114 14:00:55.062575   29270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 14:00:55.073525   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem --> /etc/ssl/certs/132382.pem (1708 bytes)
	I1114 14:00:55.097834   29270 start.go:303] post-start completed in 144.895408ms
	I1114 14:00:55.097867   29270 fix.go:56] fixHost completed within 20.912771972s
	I1114 14:00:55.097886   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHHostname
	I1114 14:00:55.100571   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:55.100906   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:55.100934   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:55.101231   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHPort
	I1114 14:00:55.101475   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:55.101612   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:55.101729   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHUsername
	I1114 14:00:55.101875   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 14:00:55.102262   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I1114 14:00:55.102278   29270 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1114 14:00:55.238745   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699970455.186602987
	
	I1114 14:00:55.238771   29270 fix.go:206] guest clock: 1699970455.186602987
	I1114 14:00:55.238780   29270 fix.go:219] Guest: 2023-11-14 14:00:55.186602987 +0000 UTC Remote: 2023-11-14 14:00:55.097871512 +0000 UTC m=+82.493411154 (delta=88.731475ms)
	I1114 14:00:55.238797   29270 fix.go:190] guest clock delta is within tolerance: 88.731475ms
	I1114 14:00:55.238804   29270 start.go:83] releasing machines lock for "multinode-661456-m02", held for 21.053730059s
	I1114 14:00:55.238829   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .DriverName
	I1114 14:00:55.239106   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetIP
	I1114 14:00:55.241499   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:55.241891   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:55.241919   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:55.243582   29270 out.go:177] * Found network options:
	I1114 14:00:55.244866   29270 out.go:177]   - NO_PROXY=192.168.39.222
	W1114 14:00:55.246298   29270 proxy.go:119] fail to check proxy env: Error ip not in block
	I1114 14:00:55.246336   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .DriverName
	I1114 14:00:55.246900   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .DriverName
	I1114 14:00:55.247071   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .DriverName
	I1114 14:00:55.247154   29270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 14:00:55.247194   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHHostname
	W1114 14:00:55.247287   29270 proxy.go:119] fail to check proxy env: Error ip not in block
	I1114 14:00:55.247367   29270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 14:00:55.247388   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHHostname
	I1114 14:00:55.249882   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:55.250225   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:55.250257   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:55.250281   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:55.250402   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHPort
	I1114 14:00:55.250575   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:55.250703   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:55.250717   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHUsername
	I1114 14:00:55.250763   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:55.250881   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m02/id_rsa Username:docker}
	I1114 14:00:55.251008   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHPort
	I1114 14:00:55.251464   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 14:00:55.251659   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHUsername
	I1114 14:00:55.251802   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m02/id_rsa Username:docker}
	I1114 14:00:55.373138   29270 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1114 14:00:55.373189   29270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 14:00:55.373206   29270 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1114 14:00:55.373242   29270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:00:55.390084   29270 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1114 14:00:55.390158   29270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 14:00:55.390175   29270 start.go:472] detecting cgroup driver to use...
	I1114 14:00:55.390303   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 14:00:55.408825   29270 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1114 14:00:55.408892   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1114 14:00:55.419683   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1114 14:00:55.429844   29270 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1114 14:00:55.429897   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1114 14:00:55.440153   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1114 14:00:55.450592   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1114 14:00:55.461077   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1114 14:00:55.471206   29270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 14:00:55.481799   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1114 14:00:55.492939   29270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 14:00:55.503773   29270 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1114 14:00:55.503847   29270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 14:00:55.513844   29270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:00:55.619658   29270 ssh_runner.go:195] Run: sudo systemctl restart containerd
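The sed runs above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), matching the driver kubelet will be configured with; a daemon-reload and restart then apply it. A hedged Go sketch of one such in-place rewrite (path and function name are illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setSystemdCgroup rewrites the SystemdCgroup line in a containerd
    // config.toml, mirroring the sed step in the log above. Needs enough
    // privileges to write the file.
    func setSystemdCgroup(path string, enabled bool) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
    		fmt.Println("error:", err)
    	}
    	// A `systemctl daemon-reload && systemctl restart containerd`
    	// would follow, as in the log above.
    }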
	I1114 14:00:55.637164   29270 start.go:472] detecting cgroup driver to use...
	I1114 14:00:55.637260   29270 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1114 14:00:55.664195   29270 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1114 14:00:55.664223   29270 command_runner.go:130] > [Unit]
	I1114 14:00:55.664232   29270 command_runner.go:130] > Description=Docker Application Container Engine
	I1114 14:00:55.664241   29270 command_runner.go:130] > Documentation=https://docs.docker.com
	I1114 14:00:55.664250   29270 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1114 14:00:55.664260   29270 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1114 14:00:55.664267   29270 command_runner.go:130] > StartLimitBurst=3
	I1114 14:00:55.664275   29270 command_runner.go:130] > StartLimitIntervalSec=60
	I1114 14:00:55.664292   29270 command_runner.go:130] > [Service]
	I1114 14:00:55.664301   29270 command_runner.go:130] > Type=notify
	I1114 14:00:55.664308   29270 command_runner.go:130] > Restart=on-failure
	I1114 14:00:55.664319   29270 command_runner.go:130] > Environment=NO_PROXY=192.168.39.222
	I1114 14:00:55.664327   29270 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1114 14:00:55.664339   29270 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1114 14:00:55.664348   29270 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1114 14:00:55.664355   29270 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1114 14:00:55.664364   29270 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1114 14:00:55.664375   29270 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1114 14:00:55.664391   29270 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1114 14:00:55.664407   29270 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1114 14:00:55.664419   29270 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1114 14:00:55.664425   29270 command_runner.go:130] > ExecStart=
	I1114 14:00:55.664445   29270 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1114 14:00:55.664454   29270 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1114 14:00:55.664466   29270 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1114 14:00:55.664481   29270 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1114 14:00:55.664493   29270 command_runner.go:130] > LimitNOFILE=infinity
	I1114 14:00:55.664504   29270 command_runner.go:130] > LimitNPROC=infinity
	I1114 14:00:55.664511   29270 command_runner.go:130] > LimitCORE=infinity
	I1114 14:00:55.664523   29270 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1114 14:00:55.664534   29270 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1114 14:00:55.664540   29270 command_runner.go:130] > TasksMax=infinity
	I1114 14:00:55.664545   29270 command_runner.go:130] > TimeoutStartSec=0
	I1114 14:00:55.664554   29270 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1114 14:00:55.664559   29270 command_runner.go:130] > Delegate=yes
	I1114 14:00:55.664569   29270 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1114 14:00:55.664585   29270 command_runner.go:130] > KillMode=process
	I1114 14:00:55.664595   29270 command_runner.go:130] > [Install]
	I1114 14:00:55.664604   29270 command_runner.go:130] > WantedBy=multi-user.target
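The dumped unit illustrates the systemd override pattern its own comments describe: a unit that inherits a base configuration must first clear the inherited command with an empty ExecStart= line, because systemd rejects multiple ExecStart= settings for anything but Type=oneshot services. A small sketch of writing such a drop-in, assuming an illustrative path and a trimmed-down dockerd command line:

    package main

    import "os"

    // The empty ExecStart= clears the inherited command so the line that
    // follows is the only ExecStart systemd sees. Path and flags below
    // are illustrative, not the exact unit from the log.
    const dropIn = `[Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    `

    func main() {
    	if err := os.MkdirAll("/etc/systemd/system/docker.service.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/systemd/system/docker.service.d/10-override.conf",
    		[]byte(dropIn), 0o644); err != nil {
    		panic(err)
    	}
    	// `systemctl daemon-reload && systemctl restart docker` applies it.
    }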
	I1114 14:00:55.664737   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 14:00:55.679853   29270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 14:00:55.699178   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 14:00:55.712348   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1114 14:00:55.724693   29270 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1114 14:00:55.754430   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1114 14:00:55.768125   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 14:00:55.785513   29270 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1114 14:00:55.785903   29270 ssh_runner.go:195] Run: which cri-dockerd
	I1114 14:00:55.790281   29270 command_runner.go:130] > /usr/bin/cri-dockerd
	I1114 14:00:55.790387   29270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1114 14:00:55.799687   29270 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1114 14:00:55.815896   29270 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1114 14:00:55.922811   29270 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1114 14:00:56.029801   29270 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1114 14:00:56.029843   29270 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1114 14:00:56.046854   29270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:00:56.153633   29270 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1114 14:00:57.615740   29270 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.46205365s)
	I1114 14:00:57.615824   29270 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1114 14:00:57.723047   29270 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1114 14:00:57.841356   29270 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1114 14:00:57.952319   29270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:00:58.074175   29270 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1114 14:00:58.092808   29270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:00:58.198772   29270 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1114 14:00:58.280633   29270 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1114 14:00:58.280694   29270 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1114 14:00:58.286265   29270 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1114 14:00:58.286294   29270 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1114 14:00:58.286344   29270 command_runner.go:130] > Device: 16h/22d	Inode: 826         Links: 1
	I1114 14:00:58.286364   29270 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1114 14:00:58.286374   29270 command_runner.go:130] > Access: 2023-11-14 14:00:58.159425392 +0000
	I1114 14:00:58.286385   29270 command_runner.go:130] > Modify: 2023-11-14 14:00:58.159425392 +0000
	I1114 14:00:58.286396   29270 command_runner.go:130] > Change: 2023-11-14 14:00:58.161425392 +0000
	I1114 14:00:58.286403   29270 command_runner.go:130] >  Birth: -
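The stat above confirms /var/run/cri-dockerd.sock appeared inside the 60s window. A sketch of that kind of wait loop in Go (interval and function name are assumptions, not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls with os.Stat until path exists or the timeout
    // elapses, a sketch of the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }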
	I1114 14:00:58.286435   29270 start.go:540] Will wait 60s for crictl version
	I1114 14:00:58.286497   29270 ssh_runner.go:195] Run: which crictl
	I1114 14:00:58.290603   29270 command_runner.go:130] > /usr/bin/crictl
	I1114 14:00:58.290715   29270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 14:00:58.357767   29270 command_runner.go:130] > Version:  0.1.0
	I1114 14:00:58.357785   29270 command_runner.go:130] > RuntimeName:  docker
	I1114 14:00:58.357792   29270 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1114 14:00:58.357800   29270 command_runner.go:130] > RuntimeApiVersion:  v1
	I1114 14:00:58.357822   29270 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1114 14:00:58.357882   29270 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1114 14:00:58.384247   29270 command_runner.go:130] > 24.0.7
	I1114 14:00:58.385702   29270 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1114 14:00:58.411020   29270 command_runner.go:130] > 24.0.7
	I1114 14:00:58.413680   29270 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
	I1114 14:00:58.414941   29270 out.go:177]   - env NO_PROXY=192.168.39.222
	I1114 14:00:58.416693   29270 main.go:141] libmachine: (multinode-661456-m02) Calling .GetIP
	I1114 14:00:58.419335   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:58.419670   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:00:46 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 14:00:58.419704   29270 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 14:00:58.419887   29270 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 14:00:58.424076   29270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 14:00:58.436217   29270 certs.go:56] Setting up /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456 for IP: 192.168.39.228
	I1114 14:00:58.436251   29270 certs.go:190] acquiring lock for shared ca certs: {Name:mkb3fe4539ce9ed96ff0e979200082f9548591da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:00:58.436440   29270 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.key
	I1114 14:00:58.436492   29270 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.key
	I1114 14:00:58.436505   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1114 14:00:58.436520   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1114 14:00:58.436533   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1114 14:00:58.436545   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1114 14:00:58.436592   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/13238.pem (1338 bytes)
	W1114 14:00:58.436620   29270 certs.go:433] ignoring /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/13238_empty.pem, impossibly tiny 0 bytes
	I1114 14:00:58.436638   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca-key.pem (1679 bytes)
	I1114 14:00:58.436664   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem (1082 bytes)
	I1114 14:00:58.436689   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem (1123 bytes)
	I1114 14:00:58.436710   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem (1675 bytes)
	I1114 14:00:58.436746   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem (1708 bytes)
	I1114 14:00:58.436770   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:00:58.436783   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/13238.pem -> /usr/share/ca-certificates/13238.pem
	I1114 14:00:58.436797   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem -> /usr/share/ca-certificates/132382.pem
	I1114 14:00:58.437144   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 14:00:58.461303   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1114 14:00:58.484286   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 14:00:58.506806   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 14:00:58.528712   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 14:00:58.550268   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/certs/13238.pem --> /usr/share/ca-certificates/13238.pem (1338 bytes)
	I1114 14:00:58.572240   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem --> /usr/share/ca-certificates/132382.pem (1708 bytes)
	I1114 14:00:58.595127   29270 ssh_runner.go:195] Run: openssl version
	I1114 14:00:58.600852   29270 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1114 14:00:58.600999   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 14:00:58.612179   29270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:00:58.616788   29270 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 14 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:00:58.616953   29270 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:00:58.617003   29270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:00:58.622458   29270 command_runner.go:130] > b5213941
	I1114 14:00:58.622812   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 14:00:58.633326   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13238.pem && ln -fs /usr/share/ca-certificates/13238.pem /etc/ssl/certs/13238.pem"
	I1114 14:00:58.643723   29270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13238.pem
	I1114 14:00:58.648127   29270 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 14 13:40 /usr/share/ca-certificates/13238.pem
	I1114 14:00:58.648186   29270 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 13:40 /usr/share/ca-certificates/13238.pem
	I1114 14:00:58.648242   29270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13238.pem
	I1114 14:00:58.653720   29270 command_runner.go:130] > 51391683
	I1114 14:00:58.653772   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13238.pem /etc/ssl/certs/51391683.0"
	I1114 14:00:58.664006   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132382.pem && ln -fs /usr/share/ca-certificates/132382.pem /etc/ssl/certs/132382.pem"
	I1114 14:00:58.674263   29270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132382.pem
	I1114 14:00:58.678658   29270 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 14 13:40 /usr/share/ca-certificates/132382.pem
	I1114 14:00:58.678927   29270 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 13:40 /usr/share/ca-certificates/132382.pem
	I1114 14:00:58.678979   29270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132382.pem
	I1114 14:00:58.684853   29270 command_runner.go:130] > 3ec20f2e
	I1114 14:00:58.685067   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132382.pem /etc/ssl/certs/3ec20f2e.0"
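The openssl x509 -hash calls above drive the standard OpenSSL trust-store layout: each CA is reachable via a /etc/ssl/certs/<subject-hash>.0 symlink, which is why the hashes b5213941, 51391683 and 3ec20f2e become link names. A sketch of computing the hash and creating the link (the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCAByHash computes the OpenSSL subject hash of certPath and links
    // /etc/ssl/certs/<hash>.0 to it, mirroring the openssl+ln steps above.
    func linkCAByHash(certPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := "/etc/ssl/certs/" + hash + ".0"
    	if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
    		return "", err
    	}
    	return link, nil
    }

    func main() {
    	link, err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("linked:", link) // e.g. /etc/ssl/certs/b5213941.0
    }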
	I1114 14:00:58.695268   29270 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 14:00:58.699141   29270 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 14:00:58.699307   29270 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 14:00:58.699382   29270 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1114 14:00:58.726935   29270 command_runner.go:130] > cgroupfs
	I1114 14:00:58.727034   29270 cni.go:84] Creating CNI manager for ""
	I1114 14:00:58.727050   29270 cni.go:136] 3 nodes found, recommending kindnet
	I1114 14:00:58.727063   29270 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 14:00:58.727091   29270 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-661456 NodeName:multinode-661456-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 14:00:58.727241   29270 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-661456-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
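The config above is four stacked YAML documents: InitConfiguration and ClusterConfiguration for kubeadm, then KubeletConfiguration and KubeProxyConfiguration. Only a few fields vary per node (advertise address, node name, node-ip); a speculative sketch of rendering such node-specific fields with text/template (the template and struct are illustrative, not minikube's actual kubeadm template):

    package main

    import (
    	"os"
    	"text/template"
    )

    // nodeTmpl renders the per-node nodeRegistration block shown in the
    // InitConfiguration above.
    const nodeTmpl = `nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    type nodeParams struct {
    	CRISocket, NodeName, NodeIP string
    }

    func main() {
    	t := template.Must(template.New("node").Parse(nodeTmpl))
    	_ = t.Execute(os.Stdout, nodeParams{
    		CRISocket: "unix:///var/run/cri-dockerd.sock",
    		NodeName:  "multinode-661456-m02",
    		NodeIP:    "192.168.39.228",
    	})
    }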
	
	I1114 14:00:58.727306   29270 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-661456-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-661456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 14:00:58.727370   29270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 14:00:58.737675   29270 command_runner.go:130] > kubeadm
	I1114 14:00:58.737698   29270 command_runner.go:130] > kubectl
	I1114 14:00:58.737704   29270 command_runner.go:130] > kubelet
	I1114 14:00:58.737726   29270 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 14:00:58.737776   29270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1114 14:00:58.747143   29270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I1114 14:00:58.763909   29270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 14:00:58.779913   29270 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I1114 14:00:58.783727   29270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 14:00:58.795363   29270 host.go:66] Checking if "multinode-661456" exists ...
	I1114 14:00:58.795682   29270 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 14:00:58.795766   29270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:00:58.795797   29270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:00:58.809750   29270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43553
	I1114 14:00:58.810234   29270 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:00:58.810752   29270 main.go:141] libmachine: Using API Version  1
	I1114 14:00:58.810777   29270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:00:58.811090   29270 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:00:58.811268   29270 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 14:00:58.811425   29270 start.go:304] JoinCluster: &{Name:multinode-661456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-661456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:00:58.811572   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1114 14:00:58.811593   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 14:00:58.814326   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:00:58.814766   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 14:00:58.814797   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:00:58.814907   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 14:00:58.815082   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 14:00:58.815229   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 14:00:58.815358   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 14:00:58.976114   29270 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token qph9ky.d4940nn23xp2m9jj --discovery-token-ca-cert-hash sha256:3d0753b1fc9b8f69eb568b4419207eb7ec90f54c0fdfe0ded5ab09203d39de66 
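The line above is the output of kubeadm token create --print-join-command --ttl=0 run on the control plane over SSH; the printed command is reused verbatim to join the worker. A local-exec sketch (in the real flow this runs on the control-plane node):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // printJoinCommand mints a bootstrap token and returns the
    // ready-to-run join line, as in the step above.
    func printJoinCommand() (string, error) {
    	out, err := exec.Command("kubeadm", "token", "create",
    		"--print-join-command", "--ttl=0").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	join, err := printJoinCommand()
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println(join) // kubeadm join control-plane.minikube.internal:8443 --token ...
    }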
	I1114 14:00:58.976565   29270 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.228 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1114 14:00:58.976596   29270 host.go:66] Checking if "multinode-661456" exists ...
	I1114 14:00:58.976892   29270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:00:58.976921   29270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:00:58.991435   29270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41499
	I1114 14:00:58.991864   29270 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:00:58.992265   29270 main.go:141] libmachine: Using API Version  1
	I1114 14:00:58.992288   29270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:00:58.992547   29270 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:00:58.992717   29270 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 14:00:58.992847   29270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-661456-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1114 14:00:58.992861   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 14:00:58.995629   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:00:58.996052   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 14:00:58.996091   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:00:58.996183   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 14:00:58.996368   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 14:00:58.996499   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 14:00:58.996649   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 14:00:59.186881   29270 command_runner.go:130] > node/multinode-661456-m02 cordoned
	I1114 14:01:02.226373   29270 command_runner.go:130] > pod "busybox-5bc68d56bd-tx7cv" has DeletionTimestamp older than 1 seconds, skipping
	I1114 14:01:02.226420   29270 command_runner.go:130] > node/multinode-661456-m02 drained
	I1114 14:01:02.228435   29270 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1114 14:01:02.228461   29270 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-8rqgf, kube-system/kube-proxy-fkj7d
	I1114 14:01:02.228516   29270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-661456-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.23561678s)
	I1114 14:01:02.228550   29270 node.go:108] successfully drained node "m02"
	I1114 14:01:02.228887   29270 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 14:01:02.229164   29270 kapi.go:59] client config for multinode-661456: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.key", CAFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c236c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 14:01:02.229585   29270 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1114 14:01:02.229654   29270 round_trippers.go:463] DELETE https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:02.229667   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:02.229680   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:02.229691   29270 round_trippers.go:473]     Content-Type: application/json
	I1114 14:01:02.229707   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:02.236533   29270 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1114 14:01:02.236561   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:02.236575   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:02.236583   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:02.236591   29270 round_trippers.go:580]     Content-Length: 171
	I1114 14:01:02.236598   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:02 GMT
	I1114 14:01:02.236604   29270 round_trippers.go:580]     Audit-Id: 6a937dce-87e8-4841-83f5-90281410b765
	I1114 14:01:02.236611   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:02.236618   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:02.236645   29270 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-661456-m02","kind":"nodes","uid":"4c94b1f0-8936-49a1-a31c-f75b72563ea3"}}
	I1114 14:01:02.236680   29270 node.go:124] successfully deleted node "m02"
	I1114 14:01:02.236695   29270 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.228 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
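Rejoining a restarted worker is done as remove-then-join: drain the node, DELETE its Node object (the 200 response above), then run a fresh kubeadm join, which avoids colliding with the stale Node record. A client-go sketch of the delete step, assuming an illustrative kubeconfig path:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // deleteNode removes a Node object via the API, the same call the
    // DELETE /api/v1/nodes/... request above performs. Drain the node
    // first, as the log does.
    func deleteNode(kubeconfig, name string) error {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return err
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	return cs.CoreV1().Nodes().Delete(context.Background(), name, metav1.DeleteOptions{})
    }

    func main() {
    	if err := deleteNode("/var/lib/minikube/kubeconfig", "multinode-661456-m02"); err != nil {
    		fmt.Println("error:", err)
    	}
    }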
	I1114 14:01:02.236724   29270 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.228 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1114 14:01:02.236749   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qph9ky.d4940nn23xp2m9jj --discovery-token-ca-cert-hash sha256:3d0753b1fc9b8f69eb568b4419207eb7ec90f54c0fdfe0ded5ab09203d39de66 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-661456-m02"
	I1114 14:01:02.338542   29270 command_runner.go:130] ! W1114 14:01:02.285739    1157 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1114 14:01:02.603173   29270 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 14:01:04.320774   29270 command_runner.go:130] > [preflight] Running pre-flight checks
	I1114 14:01:04.320799   29270 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1114 14:01:04.320812   29270 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1114 14:01:04.320835   29270 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 14:01:04.320846   29270 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 14:01:04.320856   29270 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1114 14:01:04.320868   29270 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1114 14:01:04.320882   29270 command_runner.go:130] > This node has joined the cluster:
	I1114 14:01:04.320894   29270 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1114 14:01:04.320907   29270 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1114 14:01:04.320922   29270 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1114 14:01:04.320950   29270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qph9ky.d4940nn23xp2m9jj --discovery-token-ca-cert-hash sha256:3d0753b1fc9b8f69eb568b4419207eb7ec90f54c0fdfe0ded5ab09203d39de66 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-661456-m02": (2.084174486s)
	I1114 14:01:04.320976   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1114 14:01:04.655046   29270 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1114 14:01:04.655133   29270 start.go:306] JoinCluster complete in 5.843705852s
	I1114 14:01:04.655160   29270 cni.go:84] Creating CNI manager for ""
	I1114 14:01:04.655167   29270 cni.go:136] 3 nodes found, recommending kindnet
	I1114 14:01:04.655224   29270 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1114 14:01:04.661311   29270 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1114 14:01:04.661337   29270 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1114 14:01:04.661345   29270 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1114 14:01:04.661354   29270 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 14:01:04.661363   29270 command_runner.go:130] > Access: 2023-11-14 13:59:45.502880393 +0000
	I1114 14:01:04.661370   29270 command_runner.go:130] > Modify: 2023-11-11 02:04:07.000000000 +0000
	I1114 14:01:04.661379   29270 command_runner.go:130] > Change: 2023-11-14 13:59:43.657880393 +0000
	I1114 14:01:04.661390   29270 command_runner.go:130] >  Birth: -
	I1114 14:01:04.661451   29270 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1114 14:01:04.661464   29270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1114 14:01:04.684673   29270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1114 14:01:05.003527   29270 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1114 14:01:05.011876   29270 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1114 14:01:05.017370   29270 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1114 14:01:05.032538   29270 command_runner.go:130] > daemonset.apps/kindnet configured
	I1114 14:01:05.036245   29270 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 14:01:05.036491   29270 kapi.go:59] client config for multinode-661456: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.key", CAFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c236c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 14:01:05.036891   29270 round_trippers.go:463] GET https://192.168.39.222:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1114 14:01:05.036910   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:05.036922   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:05.036942   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:05.039627   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:05.039648   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:05.039658   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:05.039667   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:05.039675   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:05.039689   29270 round_trippers.go:580]     Content-Length: 291
	I1114 14:01:05.039700   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:05 GMT
	I1114 14:01:05.039712   29270 round_trippers.go:580]     Audit-Id: 569dc686-c8a9-4fc3-b206-7284905b67ed
	I1114 14:01:05.039723   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:05.039753   29270 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9d8407fd-076f-444c-a235-0048e6022d7e","resourceVersion":"906","creationTimestamp":"2023-11-14T13:53:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1114 14:01:05.039853   29270 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-661456" context rescaled to 1 replicas
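The GET on the coredns scale subresource above reads the current replica count before the deployment is rescaled to 1 for the multinode profile. A hedged client-go sketch of a read-then-update against that subresource (function name and kubeconfig path are assumptions):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // rescaleCoreDNS uses the Scale subresource, matching the GET on
    // .../deployments/coredns/scale above; the update only runs when the
    // desired replica count differs.
    func rescaleCoreDNS(kubeconfig string, replicas int32) error {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return err
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	deployments := cs.AppsV1().Deployments("kube-system")
    	scale, err := deployments.GetScale(context.Background(), "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if scale.Spec.Replicas == replicas {
    		return nil // already at the desired count
    	}
    	scale.Spec.Replicas = replicas
    	_, err = deployments.UpdateScale(context.Background(), "coredns", scale, metav1.UpdateOptions{})
    	return err
    }

    func main() {
    	if err := rescaleCoreDNS("/home/jenkins/minikube-integration/17581-6041/kubeconfig", 1); err != nil {
    		fmt.Println("error:", err)
    	}
    }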
	I1114 14:01:05.039888   29270 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.228 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1114 14:01:05.042728   29270 out.go:177] * Verifying Kubernetes components...
	I1114 14:01:05.044340   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:01:05.059305   29270 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 14:01:05.059526   29270 kapi.go:59] client config for multinode-661456: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.key", CAFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c236c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 14:01:05.059731   29270 node_ready.go:35] waiting up to 6m0s for node "multinode-661456-m02" to be "Ready" ...
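The repeated GETs that follow are this wait loop in action: fetch the Node object, check its Ready condition, sleep, retry, for up to 6m. A sketch of such a poll with client-go (interval and helper name are assumptions, not minikube's code):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the Node object until its Ready condition is
    // True, a sketch of the node_ready wait loop above.
    func waitNodeReady(kubeconfig, name string, timeout time.Duration) error {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return err
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %q not Ready after %s", name, timeout)
    }

    func main() {
    	err := waitNodeReady("/home/jenkins/minikube-integration/17581-6041/kubeconfig",
    		"multinode-661456-m02", 6*time.Minute)
    	fmt.Println(err)
    }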
	I1114 14:01:05.059787   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:05.059795   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:05.059802   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:05.059808   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:05.062631   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:05.062653   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:05.062664   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:05.062673   29270 round_trippers.go:580]     Content-Length: 4030
	I1114 14:01:05.062682   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:05 GMT
	I1114 14:01:05.062691   29270 round_trippers.go:580]     Audit-Id: 8b4dcd76-44a7-4d87-a0d9-608e6c9cdf34
	I1114 14:01:05.062705   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:05.062714   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:05.062724   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:05.062823   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"973","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3006 chars]
	I1114 14:01:05.063177   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:05.063201   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:05.063212   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:05.063221   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:05.065409   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:05.065448   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:05.065460   29270 round_trippers.go:580]     Audit-Id: d18f7660-908b-479c-b4ce-ff1493e4df84
	I1114 14:01:05.065468   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:05.065475   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:05.065484   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:05.065509   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:05.065517   29270 round_trippers.go:580]     Content-Length: 4030
	I1114 14:01:05.065524   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:05 GMT
	I1114 14:01:05.065622   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"973","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3006 chars]
	I1114 14:01:05.566646   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:05.566674   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:05.566683   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:05.566688   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:05.569389   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:05.569417   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:05.569424   29270 round_trippers.go:580]     Audit-Id: 71489d81-a657-42c7-9533-2b471325f0d9
	I1114 14:01:05.569450   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:05.569460   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:05.569473   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:05.569483   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:05.569491   29270 round_trippers.go:580]     Content-Length: 4030
	I1114 14:01:05.569497   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:05 GMT
	I1114 14:01:05.569584   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"973","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3006 chars]
	I1114 14:01:06.066089   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:06.066128   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:06.066137   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:06.066143   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:06.069174   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:06.069197   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:06.069207   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:06.069216   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:06.069225   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:06.069238   29270 round_trippers.go:580]     Content-Length: 4030
	I1114 14:01:06.069251   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:06 GMT
	I1114 14:01:06.069259   29270 round_trippers.go:580]     Audit-Id: 3df6021f-d4f0-47e3-99f8-aa9477019ccd
	I1114 14:01:06.069271   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:06.069376   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"973","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3006 chars]
	I1114 14:01:06.566895   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:06.566919   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:06.566927   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:06.566938   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:06.570542   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:06.570575   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:06.570585   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:06.570593   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:06.570602   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:06.570611   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:06.570621   29270 round_trippers.go:580]     Content-Length: 4030
	I1114 14:01:06.570632   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:06 GMT
	I1114 14:01:06.570642   29270 round_trippers.go:580]     Audit-Id: 76d9ea5b-9d76-46bf-a780-016146490ed7
	I1114 14:01:06.570695   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"973","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 3006 chars]
	I1114 14:01:07.066219   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:07.066244   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:07.066252   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:07.066259   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:07.069664   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:07.069691   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:07.069703   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:07.069712   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:07 GMT
	I1114 14:01:07.069727   29270 round_trippers.go:580]     Audit-Id: d49004d8-732f-44e8-9ce2-11c1370a1885
	I1114 14:01:07.069740   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:07.069751   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:07.069760   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:07.070327   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"995","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I1114 14:01:07.070579   29270 node_ready.go:58] node "multinode-661456-m02" has status "Ready":"False"
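The node_ready.go:58 line closes one poll cycle: the node's Ready condition is still False, so the loop sleeps and re-GETs the node roughly every 500ms (compare the .566 and .066 timestamps above). A minimal sketch of such a wait loop with client-go; isNodeReady and waitNodeReady are illustrative names assuming a standard clientset, not minikube's actual helpers:

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// isNodeReady reports whether the NodeReady condition is True (the check
// behind the `has status "Ready":"False"` lines in the log).
func isNodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitNodeReady re-GETs the node every 500ms until it reports Ready or the
// timeout elapses, mirroring the polling cadence visible above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			return isNodeReady(n), nil
		})
}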
	I1114 14:01:07.566000   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:07.566022   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:07.566030   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:07.566036   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:07.568912   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:07.568931   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:07.568938   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:07.568943   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:07.568949   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:07.568954   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:07 GMT
	I1114 14:01:07.568964   29270 round_trippers.go:580]     Audit-Id: 3637440f-dc6a-4ea1-8c85-05c1b985ccbf
	I1114 14:01:07.568972   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:07.569117   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"995","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I1114 14:01:08.066614   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:08.066643   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:08.066652   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:08.066658   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:08.069379   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:08.069401   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:08.069408   29270 round_trippers.go:580]     Audit-Id: 4878efba-bf2c-48db-9b21-147e12aa48b6
	I1114 14:01:08.069413   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:08.069418   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:08.069424   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:08.069451   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:08.069459   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:08 GMT
	I1114 14:01:08.069552   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"995","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I1114 14:01:08.566735   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:08.566766   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:08.566778   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:08.566789   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:08.569511   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:08.569544   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:08.569555   29270 round_trippers.go:580]     Audit-Id: 807db630-7163-4e89-95aa-2ab049f46dc5
	I1114 14:01:08.569562   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:08.569569   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:08.569577   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:08.569586   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:08.569598   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:08 GMT
	I1114 14:01:08.569703   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"995","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I1114 14:01:09.066406   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:09.066427   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:09.066436   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:09.066442   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:09.070050   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:09.070071   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:09.070078   29270 round_trippers.go:580]     Audit-Id: 37987530-ed5c-4aad-90dd-1e504bad49b7
	I1114 14:01:09.070083   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:09.070089   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:09.070093   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:09.070099   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:09.070104   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:09 GMT
	I1114 14:01:09.070393   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"995","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I1114 14:01:09.070629   29270 node_ready.go:58] node "multinode-661456-m02" has status "Ready":"False"
	I1114 14:01:09.566469   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:09.566489   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:09.566497   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:09.566503   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:09.569218   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:09.569240   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:09.569247   29270 round_trippers.go:580]     Audit-Id: 92dd3d5f-b340-4809-8762-bee3f22bd932
	I1114 14:01:09.569252   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:09.569258   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:09.569263   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:09.569268   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:09.569275   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:09 GMT
	I1114 14:01:09.569473   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"995","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I1114 14:01:10.066170   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:10.066204   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:10.066216   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:10.066225   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:10.069362   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:10.069379   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:10.069386   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:10 GMT
	I1114 14:01:10.069391   29270 round_trippers.go:580]     Audit-Id: d02d1657-6816-45bb-be8a-a04d71cbc39a
	I1114 14:01:10.069396   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:10.069401   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:10.069406   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:10.069411   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:10.069835   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"995","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I1114 14:01:10.566455   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:10.566477   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:10.566485   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:10.566491   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:10.569105   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:10.569127   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:10.569137   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:10 GMT
	I1114 14:01:10.569146   29270 round_trippers.go:580]     Audit-Id: 0a4023f7-a6f5-45bf-abcb-d3a606b986bc
	I1114 14:01:10.569153   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:10.569161   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:10.569169   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:10.569177   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:10.569286   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"995","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I1114 14:01:11.067008   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:11.067039   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:11.067056   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:11.067066   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:11.069959   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:11.069976   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:11.069983   29270 round_trippers.go:580]     Audit-Id: ec413fe3-ee7d-492d-8438-4535af190aee
	I1114 14:01:11.069998   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:11.070003   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:11.070008   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:11.070014   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:11.070019   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:11 GMT
	I1114 14:01:11.070136   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"995","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I1114 14:01:11.566957   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:11.566994   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:11.567006   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:11.567017   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:11.571164   29270 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 14:01:11.571187   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:11.571203   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:11.571211   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:11.571219   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:11.571235   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:11 GMT
	I1114 14:01:11.571242   29270 round_trippers.go:580]     Audit-Id: 0ba20ac3-5d5c-48f5-bbb4-b878ccdb7506
	I1114 14:01:11.571248   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:11.571476   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"995","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I1114 14:01:11.571771   29270 node_ready.go:58] node "multinode-661456-m02" has status "Ready":"False"
	I1114 14:01:12.066138   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:12.066160   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:12.066170   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:12.066177   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:12.068483   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:12.068505   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:12.068515   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:12 GMT
	I1114 14:01:12.068522   29270 round_trippers.go:580]     Audit-Id: bdf69f29-6735-432c-81ef-05f354d94c96
	I1114 14:01:12.068529   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:12.068536   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:12.068544   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:12.068553   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:12.068747   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"995","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I1114 14:01:12.566402   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:12.566427   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:12.566441   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:12.566449   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:12.569489   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:12.569510   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:12.569520   29270 round_trippers.go:580]     Audit-Id: 7cab39b3-f305-4e88-829b-d2872b93c22d
	I1114 14:01:12.569529   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:12.569536   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:12.569543   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:12.569564   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:12.569577   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:12 GMT
	I1114 14:01:12.569862   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"995","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I1114 14:01:13.066325   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:13.066358   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:13.066371   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:13.066380   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:13.069018   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:13.069041   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:13.069048   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:13 GMT
	I1114 14:01:13.069053   29270 round_trippers.go:580]     Audit-Id: 20479fe8-4682-4b7c-917d-28a502ed7957
	I1114 14:01:13.069061   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:13.069069   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:13.069078   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:13.069088   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:13.069357   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"995","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3115 chars]
	I1114 14:01:13.566046   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:13.566077   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:13.566086   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:13.566092   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:13.569030   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:13.569060   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:13.569070   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:13.569079   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:13.569087   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:13.569095   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:13.569103   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:13 GMT
	I1114 14:01:13.569111   29270 round_trippers.go:580]     Audit-Id: 5591f550-ca91-44c6-9b9a-3c509445d4ac
	I1114 14:01:13.569240   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"1010","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 3374 chars]
	I1114 14:01:13.569576   29270 node_ready.go:49] node "multinode-661456-m02" has status "Ready":"True"
	I1114 14:01:13.569599   29270 node_ready.go:38] duration metric: took 8.509853564s waiting for node "multinode-661456-m02" to be "Ready" ...
	I1114 14:01:13.569610   29270 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
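With the node Ready after about 8.5s, the wait switches from node polling to pod polling. The labels listed above identify the system-critical components, and the GET that follows fetches the entire kube-system PodList with no label selector in the URL, so the matching is evidently done client-side. A sketch of that selection step, assuming a plain clientset (listCriticalPods and the label table are illustrative names, not minikube's):

package podwait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// critical mirrors the label list in the log line above.
var critical = []struct{ key, val string }{
	{"k8s-app", "kube-dns"},
	{"component", "etcd"},
	{"component", "kube-apiserver"},
	{"component", "kube-controller-manager"},
	{"k8s-app", "kube-proxy"},
	{"component", "kube-scheduler"},
}

// listCriticalPods issues one unfiltered List against kube-system (matching
// the single GET in the log) and keeps pods carrying any critical label.
func listCriticalPods(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
	all, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var out []corev1.Pod
	for _, p := range all.Items {
		for _, c := range critical {
			if p.Labels[c.key] == c.val {
				out = append(out, p)
				break
			}
		}
	}
	return out, nil
}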
	I1114 14:01:13.569675   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I1114 14:01:13.569687   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:13.569697   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:13.569709   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:13.572953   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:13.572966   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:13.572974   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:13.572981   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:13.572987   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:13.572994   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:13 GMT
	I1114 14:01:13.573001   29270 round_trippers.go:580]     Audit-Id: 56dccaa2-358b-4d42-a62c-5c5957d847a4
	I1114 14:01:13.573017   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:13.575249   29270 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1011"},"items":[{"metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"902","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 83827 chars]
	I1114 14:01:13.577642   29270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace to be "Ready" ...
	I1114 14:01:13.577714   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:01:13.577726   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:13.577735   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:13.577741   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:13.580084   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:13.580103   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:13.580113   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:13 GMT
	I1114 14:01:13.580121   29270 round_trippers.go:580]     Audit-Id: 861b4735-ac30-4075-81e8-10141cdd3e99
	I1114 14:01:13.580129   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:13.580137   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:13.580145   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:13.580153   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:13.580491   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"902","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I1114 14:01:13.580894   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:01:13.580907   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:13.580914   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:13.580919   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:13.582969   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:13.582983   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:13.582989   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:13.582994   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:13.583000   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:13.583005   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:13 GMT
	I1114 14:01:13.583010   29270 round_trippers.go:580]     Audit-Id: da3f65c2-68c9-4fc6-b904-0ff06a3fca3e
	I1114 14:01:13.583015   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:13.583338   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:01:13.583650   29270 pod_ready.go:92] pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace has status "Ready":"True"
	I1114 14:01:13.583665   29270 pod_ready.go:81] duration metric: took 6.004136ms waiting for pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace to be "Ready" ...
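Each per-pod wait that follows repeats the two-step pattern just seen for coredns: GET the pod, then GET the node it runs on, and only then record the pod as Ready (pod_ready.go:92). The condition test itself appears to be the standard PodReady check; a short sketch (isPodReady is an illustrative name):

package podwait

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's PodReady condition is True (the
// test behind the `has status "Ready":"True"` lines above).
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}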
	I1114 14:01:13.583674   29270 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:01:13.583731   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-661456
	I1114 14:01:13.583742   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:13.583753   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:13.583765   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:13.585904   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:13.585917   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:13.585923   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:13.585928   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:13 GMT
	I1114 14:01:13.585933   29270 round_trippers.go:580]     Audit-Id: e6a1f69e-3ddf-4368-bb1c-1424cba6a80b
	I1114 14:01:13.585939   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:13.585947   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:13.585955   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:13.586214   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-661456","namespace":"kube-system","uid":"a7fc10f1-0274-4c69-9ce0-a962bdfb4e17","resourceVersion":"890","creationTimestamp":"2023-11-14T13:53:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.222:2379","kubernetes.io/config.hash":"9e92b05ce6d5e91e18d34c8472e5d273","kubernetes.io/config.mirror":"9e92b05ce6d5e91e18d34c8472e5d273","kubernetes.io/config.seen":"2023-11-14T13:53:24.984306855Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I1114 14:01:13.586580   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:01:13.586592   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:13.586599   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:13.586605   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:13.590036   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:13.590059   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:13.590069   29270 round_trippers.go:580]     Audit-Id: 60bbf978-9f98-4078-acef-7fbee95c2073
	I1114 14:01:13.590075   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:13.590080   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:13.590085   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:13.590090   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:13.590095   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:13 GMT
	I1114 14:01:13.590213   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:01:13.590506   29270 pod_ready.go:92] pod "etcd-multinode-661456" in "kube-system" namespace has status "Ready":"True"
	I1114 14:01:13.590526   29270 pod_ready.go:81] duration metric: took 6.84609ms waiting for pod "etcd-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:01:13.590541   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:01:13.590605   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-661456
	I1114 14:01:13.590616   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:13.590625   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:13.590637   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:13.594469   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:13.594486   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:13.594493   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:13.594502   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:13.594510   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:13.594518   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:13.594527   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:13 GMT
	I1114 14:01:13.594539   29270 round_trippers.go:580]     Audit-Id: 1199d63a-a3cd-4d32-adda-af489a5736db
	I1114 14:01:13.594953   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-661456","namespace":"kube-system","uid":"85c4ecc0-d6c3-46ba-a099-ba93cb0fac2e","resourceVersion":"877","creationTimestamp":"2023-11-14T13:53:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.222:8443","kubernetes.io/config.hash":"53c7ea94508e5c77038361438391a9cf","kubernetes.io/config.mirror":"53c7ea94508e5c77038361438391a9cf","kubernetes.io/config.seen":"2023-11-14T13:53:33.091288385Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7615 chars]
	I1114 14:01:13.595325   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:01:13.595337   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:13.595344   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:13.595349   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:13.597399   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:13.597418   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:13.597440   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:13.597450   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:13 GMT
	I1114 14:01:13.597457   29270 round_trippers.go:580]     Audit-Id: 2b071607-b737-4db6-9edb-4ef7f9a908a2
	I1114 14:01:13.597462   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:13.597470   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:13.597475   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:13.597650   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:01:13.597934   29270 pod_ready.go:92] pod "kube-apiserver-multinode-661456" in "kube-system" namespace has status "Ready":"True"
	I1114 14:01:13.597948   29270 pod_ready.go:81] duration metric: took 7.399917ms waiting for pod "kube-apiserver-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:01:13.597956   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:01:13.598008   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-661456
	I1114 14:01:13.598015   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:13.598022   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:13.598029   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:13.600213   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:13.600232   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:13.600239   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:13.600245   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:13 GMT
	I1114 14:01:13.600250   29270 round_trippers.go:580]     Audit-Id: ce0d7695-c568-4939-890b-b05c6124085c
	I1114 14:01:13.600255   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:13.600260   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:13.600265   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:13.600452   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-661456","namespace":"kube-system","uid":"503c91d5-280b-44ab-8801-da2418e2bf6c","resourceVersion":"875","creationTimestamp":"2023-11-14T13:53:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53acd8cd74cbb0cff5dbf435dc1b4fe3","kubernetes.io/config.mirror":"53acd8cd74cbb0cff5dbf435dc1b4fe3","kubernetes.io/config.seen":"2023-11-14T13:53:33.091289647Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7178 chars]
	I1114 14:01:13.600829   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:01:13.600842   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:13.600850   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:13.600855   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:13.603339   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:13.603362   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:13.603372   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:13.603378   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:13.603383   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:13 GMT
	I1114 14:01:13.603388   29270 round_trippers.go:580]     Audit-Id: 5fe5d53f-b8eb-4ec7-9f9c-3937311e064f
	I1114 14:01:13.603393   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:13.603400   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:13.603501   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:01:13.603875   29270 pod_ready.go:92] pod "kube-controller-manager-multinode-661456" in "kube-system" namespace has status "Ready":"True"
	I1114 14:01:13.603894   29270 pod_ready.go:81] duration metric: took 5.930141ms waiting for pod "kube-controller-manager-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:01:13.603907   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fkj7d" in "kube-system" namespace to be "Ready" ...
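The pod_ready.go entries above are minikube polling each system pod until its Ready condition reports True, with a 6m0s ceiling per pod. A minimal client-go sketch of that polling pattern, illustrative only and not minikube's actual code (waitPodReady and its inputs are assumed names):

	package podready // hypothetical package name for this sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitPodReady polls every 2s for up to 6m, mirroring the
	// "waiting up to 6m0s for pod ... to be Ready" steps in the log.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat lookup errors as "not ready yet"
				}
				return isPodReady(pod), nil
			})
	}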
	I1114 14:01:13.766206   29270 request.go:629] Waited for 162.242985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkj7d
	I1114 14:01:13.766272   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkj7d
	I1114 14:01:13.766277   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:13.766284   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:13.766292   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:13.769023   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:13.769041   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:13.769048   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:13.769053   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:13.769062   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:13.769070   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:13 GMT
	I1114 14:01:13.769081   29270 round_trippers.go:580]     Audit-Id: 9b97237a-79eb-42ea-ae41-32f7247292d6
	I1114 14:01:13.769090   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:13.769221   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fkj7d","generateName":"kube-proxy-","namespace":"kube-system","uid":"5d920620-7354-4418-a44e-c7f2965d75a4","resourceVersion":"980","creationTimestamp":"2023-11-14T13:54:42Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8feb3a9f-acf6-44be-b014-f7ba9b8cce85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:54:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8feb3a9f-acf6-44be-b014-f7ba9b8cce85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5749 chars]
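The request.go:629 "Waited ... due to client-side throttling" lines are the Kubernetes client's own token-bucket limiter pacing its requests; as the messages say, the server-side priority-and-fairness machinery (the X-Kubernetes-Pf-* headers above) is not the cause. A standalone sketch of the same token-bucket idea using golang.org/x/time/rate (QPS and burst values are illustrative, not client-go's defaults):

	package main

	import (
		"context"
		"fmt"
		"time"

		"golang.org/x/time/rate"
	)

	func main() {
		// 5 requests/second with a burst of 10; illustrative values only.
		limiter := rate.NewLimiter(rate.Limit(5), 10)
		ctx := context.Background()
		for i := 0; i < 20; i++ {
			start := time.Now()
			if err := limiter.Wait(ctx); err != nil { // blocks until a token is free
				panic(err)
			}
			if d := time.Since(start); d > time.Millisecond {
				fmt.Printf("waited %v due to client-side throttling\n", d)
			}
			// a real client would issue its API request here
		}
	}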
	I1114 14:01:13.966960   29270 request.go:629] Waited for 197.23227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:13.967039   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:01:13.967046   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:13.967058   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:13.967071   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:13.969874   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:13.969894   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:13.969901   29270 round_trippers.go:580]     Audit-Id: cb1daded-bb39-4fc2-b7e5-f9a69c4460f6
	I1114 14:01:13.969906   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:13.969912   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:13.969917   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:13.969922   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:13.969930   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:13 GMT
	I1114 14:01:13.970119   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"1010","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 3374 chars]
	I1114 14:01:13.970368   29270 pod_ready.go:92] pod "kube-proxy-fkj7d" in "kube-system" namespace has status "Ready":"True"
	I1114 14:01:13.970381   29270 pod_ready.go:81] duration metric: took 366.467552ms waiting for pod "kube-proxy-fkj7d" in "kube-system" namespace to be "Ready" ...
	I1114 14:01:13.970390   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ndrhk" in "kube-system" namespace to be "Ready" ...
	I1114 14:01:14.166918   29270 request.go:629] Waited for 196.477335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ndrhk
	I1114 14:01:14.166994   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ndrhk
	I1114 14:01:14.167001   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:14.167010   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:14.167020   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:14.170023   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:14.170044   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:14.170055   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:14.170064   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:14.170072   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:14.170079   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:14 GMT
	I1114 14:01:14.170084   29270 round_trippers.go:580]     Audit-Id: de37bd7c-cbdd-4f36-a769-14a84d6da2d7
	I1114 14:01:14.170092   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:14.170317   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ndrhk","generateName":"kube-proxy-","namespace":"kube-system","uid":"a11d15a6-5476-429f-ae29-445fa22f70dd","resourceVersion":"794","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8feb3a9f-acf6-44be-b014-f7ba9b8cce85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8feb3a9f-acf6-44be-b014-f7ba9b8cce85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I1114 14:01:14.367107   29270 request.go:629] Waited for 196.339126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:01:14.367171   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:01:14.367176   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:14.367185   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:14.367193   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:14.369625   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:14.369649   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:14.369657   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:14 GMT
	I1114 14:01:14.369663   29270 round_trippers.go:580]     Audit-Id: 75fba9dd-d6e9-4b66-9200-5e1a4ba5dfa1
	I1114 14:01:14.369667   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:14.369672   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:14.369680   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:14.369689   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:14.370098   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:01:14.370522   29270 pod_ready.go:92] pod "kube-proxy-ndrhk" in "kube-system" namespace has status "Ready":"True"
	I1114 14:01:14.370542   29270 pod_ready.go:81] duration metric: took 400.145964ms waiting for pod "kube-proxy-ndrhk" in "kube-system" namespace to be "Ready" ...
	I1114 14:01:14.370555   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r9r5l" in "kube-system" namespace to be "Ready" ...
	I1114 14:01:14.566993   29270 request.go:629] Waited for 196.368215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r9r5l
	I1114 14:01:14.567048   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r9r5l
	I1114 14:01:14.567054   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:14.567062   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:14.567068   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:14.570085   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:14.570115   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:14.570127   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:14.570136   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:14.570146   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:14.570154   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:14.570162   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:14 GMT
	I1114 14:01:14.570169   29270 round_trippers.go:580]     Audit-Id: 986ec3cf-672b-401a-a45d-78e6a7572496
	I1114 14:01:14.570306   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r9r5l","generateName":"kube-proxy-","namespace":"kube-system","uid":"27ff4b01-cd10-4c7f-99c2-a0fe362d11ad","resourceVersion":"991","creationTimestamp":"2023-11-14T13:55:39Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8feb3a9f-acf6-44be-b014-f7ba9b8cce85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:55:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8feb3a9f-acf6-44be-b014-f7ba9b8cce85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5970 chars]
	I1114 14:01:14.767058   29270 request.go:629] Waited for 196.327452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:14.767114   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:14.767119   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:14.767127   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:14.767132   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:14.769948   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:14.769973   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:14.769982   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:14.769990   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:14 GMT
	I1114 14:01:14.769998   29270 round_trippers.go:580]     Audit-Id: 3784fda2-b86c-452c-85da-11ca4de126ac
	I1114 14:01:14.770005   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:14.770012   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:14.770018   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:14.770170   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"de2fa03b-c47f-4331-a423-2475d21c15ba","resourceVersion":"989","creationTimestamp":"2023-11-14T13:56:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:56:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3742 chars]
	I1114 14:01:14.770456   29270 pod_ready.go:97] node "multinode-661456-m03" hosting pod "kube-proxy-r9r5l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-661456-m03" has status "Ready":"Unknown"
	I1114 14:01:14.770478   29270 pod_ready.go:81] duration metric: took 399.907944ms waiting for pod "kube-proxy-r9r5l" in "kube-system" namespace to be "Ready" ...
	E1114 14:01:14.770498   29270 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-661456-m03" hosting pod "kube-proxy-r9r5l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-661456-m03" has status "Ready":"Unknown"
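pod_ready.go skips waiting on pods whose host node is not Ready, which is why kube-proxy-r9r5l on the still-stopped multinode-661456-m03 is passed over here. The node-side check reduces to scanning status conditions; a small sketch (hypothetical helper name, using the same corev1 import as the sketch above):

	// nodeReady reports whether a node's Ready condition is True.
	// A node that is down typically reports Unknown, as m03 does above.
	func nodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}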
	I1114 14:01:14.770508   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:01:14.966952   29270 request.go:629] Waited for 196.377958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-661456
	I1114 14:01:14.967004   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-661456
	I1114 14:01:14.967009   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:14.967016   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:14.967022   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:14.969915   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:14.969941   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:14.969950   29270 round_trippers.go:580]     Audit-Id: 40498a55-9163-4128-aacb-097752c51c58
	I1114 14:01:14.969956   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:14.969962   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:14.969970   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:14.969978   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:14.969990   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:14 GMT
	I1114 14:01:14.970199   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-661456","namespace":"kube-system","uid":"16644b7a-7227-47b7-a06e-94b4dd7b0cce","resourceVersion":"879","creationTimestamp":"2023-11-14T13:53:33Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6486319bbce275a5a99514fbfdfe01ab","kubernetes.io/config.mirror":"6486319bbce275a5a99514fbfdfe01ab","kubernetes.io/config.seen":"2023-11-14T13:53:33.091290734Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4908 chars]
	I1114 14:01:15.166985   29270 request.go:629] Waited for 196.350596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:01:15.167058   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:01:15.167064   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:15.167074   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:15.167086   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:15.170264   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:15.170296   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:15.170307   29270 round_trippers.go:580]     Audit-Id: 625e1110-bc67-49a1-a31b-2612cced0eb0
	I1114 14:01:15.170316   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:15.170325   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:15.170334   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:15.170342   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:15.170352   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:15 GMT
	I1114 14:01:15.170941   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:01:15.171246   29270 pod_ready.go:92] pod "kube-scheduler-multinode-661456" in "kube-system" namespace has status "Ready":"True"
	I1114 14:01:15.171263   29270 pod_ready.go:81] duration metric: took 400.742215ms waiting for pod "kube-scheduler-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:01:15.171279   29270 pod_ready.go:38] duration metric: took 1.60165707s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 14:01:15.171316   29270 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 14:01:15.171362   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:01:15.184914   29270 system_svc.go:56] duration metric: took 13.588988ms WaitForService to wait for kubelet.
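system_svc.go decides purely on exit status: `systemctl is-active --quiet` prints nothing and exits 0 only when the unit is active. Run locally instead of through minikube's SSH runner, the same probe looks like this sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command the log shows minikube running over SSH; exit
		// status 0 means the unit is active, anything else means not.
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet service is not running:", err)
			return
		}
		fmt.Println("kubelet service is running")
	}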
	I1114 14:01:15.184945   29270 kubeadm.go:581] duration metric: took 10.145029385s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 14:01:15.184968   29270 node_conditions.go:102] verifying NodePressure condition ...
	I1114 14:01:15.366426   29270 request.go:629] Waited for 181.377939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes
	I1114 14:01:15.366498   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes
	I1114 14:01:15.366507   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:15.366519   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:15.366530   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:15.369544   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:15.369565   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:15.369575   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:15.369584   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:15 GMT
	I1114 14:01:15.369590   29270 round_trippers.go:580]     Audit-Id: d7078f18-6c07-4c93-999f-6b8eb876420c
	I1114 14:01:15.369598   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:15.369609   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:15.369619   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:15.370300   29270 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1013"},"items":[{"metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 14311 chars]
	I1114 14:01:15.370884   29270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:01:15.370902   29270 node_conditions.go:123] node cpu capacity is 2
	I1114 14:01:15.370911   29270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:01:15.370915   29270 node_conditions.go:123] node cpu capacity is 2
	I1114 14:01:15.370918   29270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:01:15.370924   29270 node_conditions.go:123] node cpu capacity is 2
	I1114 14:01:15.370927   29270 node_conditions.go:105] duration metric: took 185.955505ms to run NodePressure ...
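node_conditions.go walks the NodeList and reports each node's capacity; the three identical pairs above are the cluster's three VMs. Extracting the same two quantities from a corev1.Node is straightforward (sketch, reusing the corev1 and fmt imports from the sketches above):

	// printCapacity mirrors the node_conditions.go lines above.
	func printCapacity(node *corev1.Node) {
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String()) // e.g. 17784752Ki
		fmt.Printf("node cpu capacity is %s\n", cpu.String())                   // e.g. 2
	}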
	I1114 14:01:15.370938   29270 start.go:228] waiting for startup goroutines ...
	I1114 14:01:15.370957   29270 start.go:242] writing updated cluster config ...
	I1114 14:01:15.371343   29270 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 14:01:15.371417   29270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/config.json ...
	I1114 14:01:15.374114   29270 out.go:177] * Starting worker node multinode-661456-m03 in cluster multinode-661456
	I1114 14:01:15.375297   29270 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1114 14:01:15.375318   29270 cache.go:56] Caching tarball of preloaded images
	I1114 14:01:15.375418   29270 preload.go:174] Found /home/jenkins/minikube-integration/17581-6041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1114 14:01:15.375433   29270 cache.go:59] Finished verifying existence of preloaded tar for v1.28.3 on docker
	I1114 14:01:15.375511   29270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/config.json ...
	I1114 14:01:15.375672   29270 start.go:365] acquiring machines lock for multinode-661456-m03: {Name:mka8a7be0fef2cfa89eb7b4f7f1c7ded4441f603 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 14:01:15.375712   29270 start.go:369] acquired machines lock for "multinode-661456-m03" in 23.177µs
	I1114 14:01:15.375729   29270 start.go:96] Skipping create...Using existing machine configuration
	I1114 14:01:15.375738   29270 fix.go:54] fixHost starting: m03
	I1114 14:01:15.375969   29270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:01:15.375992   29270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:01:15.389988   29270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33543
	I1114 14:01:15.390387   29270 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:01:15.390856   29270 main.go:141] libmachine: Using API Version  1
	I1114 14:01:15.390874   29270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:01:15.391223   29270 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:01:15.391394   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .DriverName
	I1114 14:01:15.391520   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetState
	I1114 14:01:15.393078   29270 fix.go:102] recreateIfNeeded on multinode-661456-m03: state=Stopped err=<nil>
	I1114 14:01:15.393106   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .DriverName
	W1114 14:01:15.393282   29270 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 14:01:15.395240   29270 out.go:177] * Restarting existing kvm2 VM for "multinode-661456-m03" ...
	I1114 14:01:15.396447   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .Start
	I1114 14:01:15.396612   29270 main.go:141] libmachine: (multinode-661456-m03) Ensuring networks are active...
	I1114 14:01:15.397281   29270 main.go:141] libmachine: (multinode-661456-m03) Ensuring network default is active
	I1114 14:01:15.397668   29270 main.go:141] libmachine: (multinode-661456-m03) Ensuring network mk-multinode-661456 is active
	I1114 14:01:15.398036   29270 main.go:141] libmachine: (multinode-661456-m03) Getting domain xml...
	I1114 14:01:15.398656   29270 main.go:141] libmachine: (multinode-661456-m03) Creating domain...
	I1114 14:01:16.647079   29270 main.go:141] libmachine: (multinode-661456-m03) Waiting to get IP...
	I1114 14:01:16.648150   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:16.648575   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | unable to find current IP address of domain multinode-661456-m03 in network mk-multinode-661456
	I1114 14:01:16.648653   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | I1114 14:01:16.648562   29720 retry.go:31] will retry after 273.904507ms: waiting for machine to come up
	I1114 14:01:16.924318   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:16.924929   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | unable to find current IP address of domain multinode-661456-m03 in network mk-multinode-661456
	I1114 14:01:16.924967   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | I1114 14:01:16.924902   29720 retry.go:31] will retry after 279.011719ms: waiting for machine to come up
	I1114 14:01:17.205362   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:17.205798   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | unable to find current IP address of domain multinode-661456-m03 in network mk-multinode-661456
	I1114 14:01:17.205828   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | I1114 14:01:17.205759   29720 retry.go:31] will retry after 329.800951ms: waiting for machine to come up
	I1114 14:01:17.537275   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:17.537780   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | unable to find current IP address of domain multinode-661456-m03 in network mk-multinode-661456
	I1114 14:01:17.537821   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | I1114 14:01:17.537737   29720 retry.go:31] will retry after 589.807212ms: waiting for machine to come up
	I1114 14:01:18.129502   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:18.129885   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | unable to find current IP address of domain multinode-661456-m03 in network mk-multinode-661456
	I1114 14:01:18.129914   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | I1114 14:01:18.129843   29720 retry.go:31] will retry after 629.644111ms: waiting for machine to come up
	I1114 14:01:18.760668   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:18.761089   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | unable to find current IP address of domain multinode-661456-m03 in network mk-multinode-661456
	I1114 14:01:18.761112   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | I1114 14:01:18.761057   29720 retry.go:31] will retry after 785.03ms: waiting for machine to come up
	I1114 14:01:19.548014   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:19.548406   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | unable to find current IP address of domain multinode-661456-m03 in network mk-multinode-661456
	I1114 14:01:19.548440   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | I1114 14:01:19.548356   29720 retry.go:31] will retry after 994.119142ms: waiting for machine to come up
	I1114 14:01:20.543807   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:20.544309   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | unable to find current IP address of domain multinode-661456-m03 in network mk-multinode-661456
	I1114 14:01:20.544349   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | I1114 14:01:20.544257   29720 retry.go:31] will retry after 1.006360411s: waiting for machine to come up
	I1114 14:01:21.552645   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:21.553112   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | unable to find current IP address of domain multinode-661456-m03 in network mk-multinode-661456
	I1114 14:01:21.553145   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | I1114 14:01:21.553064   29720 retry.go:31] will retry after 1.408372581s: waiting for machine to come up
	I1114 14:01:22.962564   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:22.963013   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | unable to find current IP address of domain multinode-661456-m03 in network mk-multinode-661456
	I1114 14:01:22.963031   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | I1114 14:01:22.962987   29720 retry.go:31] will retry after 1.646513265s: waiting for machine to come up
	I1114 14:01:24.610742   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:24.611179   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | unable to find current IP address of domain multinode-661456-m03 in network mk-multinode-661456
	I1114 14:01:24.611216   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | I1114 14:01:24.611130   29720 retry.go:31] will retry after 2.710226974s: waiting for machine to come up
	I1114 14:01:27.324263   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:27.324711   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | unable to find current IP address of domain multinode-661456-m03 in network mk-multinode-661456
	I1114 14:01:27.324741   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | I1114 14:01:27.324666   29720 retry.go:31] will retry after 3.138448032s: waiting for machine to come up
	I1114 14:01:30.464638   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:30.465142   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | unable to find current IP address of domain multinode-661456-m03 in network mk-multinode-661456
	I1114 14:01:30.465179   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | I1114 14:01:30.465083   29720 retry.go:31] will retry after 4.200726221s: waiting for machine to come up
	I1114 14:01:34.670455   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:34.670954   29270 main.go:141] libmachine: (multinode-661456-m03) Found IP for machine: 192.168.39.82
	I1114 14:01:34.670978   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has current primary IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:34.670985   29270 main.go:141] libmachine: (multinode-661456-m03) Reserving static IP address...
	I1114 14:01:34.671420   29270 main.go:141] libmachine: (multinode-661456-m03) Reserved static IP address: 192.168.39.82
	I1114 14:01:34.671443   29270 main.go:141] libmachine: (multinode-661456-m03) Waiting for SSH to be available...
	I1114 14:01:34.671467   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "multinode-661456-m03", mac: "52:54:00:07:c5:f5", ip: "192.168.39.82"} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:34.671490   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | skip adding static IP to network mk-multinode-661456 - found existing host DHCP lease matching {name: "multinode-661456-m03", mac: "52:54:00:07:c5:f5", ip: "192.168.39.82"}
	I1114 14:01:34.671504   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | Getting to WaitForSSH function...
	I1114 14:01:34.673664   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:34.673988   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:34.674028   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:34.674113   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | Using SSH client type: external
	I1114 14:01:34.674140   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m03/id_rsa (-rw-------)
	I1114 14:01:34.674165   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 14:01:34.674175   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | About to run SSH command:
	I1114 14:01:34.674184   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | exit 0
	I1114 14:01:34.769424   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | SSH cmd err, output: <nil>: 
	I1114 14:01:34.769773   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetConfigRaw
	I1114 14:01:34.770325   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetIP
	I1114 14:01:34.772873   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:34.773253   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:34.773283   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:34.773582   29270 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/config.json ...
	I1114 14:01:34.773773   29270 machine.go:88] provisioning docker machine ...
	I1114 14:01:34.773792   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .DriverName
	I1114 14:01:34.774031   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetMachineName
	I1114 14:01:34.774189   29270 buildroot.go:166] provisioning hostname "multinode-661456-m03"
	I1114 14:01:34.774205   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetMachineName
	I1114 14:01:34.774354   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHHostname
	I1114 14:01:34.776772   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:34.777143   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:34.777174   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:34.777323   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHPort
	I1114 14:01:34.777527   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:34.777669   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:34.777810   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHUsername
	I1114 14:01:34.777909   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 14:01:34.778221   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1114 14:01:34.778235   29270 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-661456-m03 && echo "multinode-661456-m03" | sudo tee /etc/hostname
	I1114 14:01:34.924707   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-661456-m03
	
	I1114 14:01:34.924732   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHHostname
	I1114 14:01:34.927625   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:34.928083   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:34.928116   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:34.928306   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHPort
	I1114 14:01:34.928521   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:34.928721   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:34.928890   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHUsername
	I1114 14:01:34.929067   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 14:01:34.929499   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1114 14:01:34.929528   29270 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-661456-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-661456-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-661456-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 14:01:35.070321   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 14:01:35.070349   29270 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17581-6041/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-6041/.minikube}
	I1114 14:01:35.070365   29270 buildroot.go:174] setting up certificates
	I1114 14:01:35.070375   29270 provision.go:83] configureAuth start
	I1114 14:01:35.070388   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetMachineName
	I1114 14:01:35.070682   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetIP
	I1114 14:01:35.073069   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:35.073449   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:35.073477   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:35.073666   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHHostname
	I1114 14:01:35.076264   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:35.076638   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:35.076668   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:35.076869   29270 provision.go:138] copyHostCerts
	I1114 14:01:35.076896   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem
	I1114 14:01:35.076936   29270 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem, removing ...
	I1114 14:01:35.076949   29270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem
	I1114 14:01:35.077027   29270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem (1082 bytes)
	I1114 14:01:35.077126   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem
	I1114 14:01:35.077159   29270 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem, removing ...
	I1114 14:01:35.077169   29270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem
	I1114 14:01:35.077205   29270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem (1123 bytes)
	I1114 14:01:35.077277   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem
	I1114 14:01:35.077300   29270 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem, removing ...
	I1114 14:01:35.077309   29270 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem
	I1114 14:01:35.077341   29270 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem (1675 bytes)
	I1114 14:01:35.077415   29270 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca-key.pem org=jenkins.multinode-661456-m03 san=[192.168.39.82 192.168.39.82 localhost 127.0.0.1 minikube multinode-661456-m03]
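provision.go mints a per-machine server certificate whose SANs cover every name and address in the san=[...] list above, signed by the local minikube CA. A compact crypto/x509 sketch of assembling those SANs (self-signed here for brevity and illustration only; the real flow signs with ca.pem/ca-key.pem):

	package certs // hypothetical package name for this sketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func makeServerCert() ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-661456-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("192.168.39.82"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "multinode-661456-m03"},
		}
		// Self-signed for brevity; provision.go signs with the cluster CA instead.
		return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	}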
	I1114 14:01:35.267161   29270 provision.go:172] copyRemoteCerts
	I1114 14:01:35.267224   29270 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 14:01:35.267244   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHHostname
	I1114 14:01:35.270003   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:35.270399   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:35.270431   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:35.270632   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHPort
	I1114 14:01:35.270828   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:35.270986   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHUsername
	I1114 14:01:35.271123   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m03/id_rsa Username:docker}
	I1114 14:01:35.370442   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1114 14:01:35.370520   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 14:01:35.395265   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1114 14:01:35.395329   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1114 14:01:35.418516   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1114 14:01:35.418572   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 14:01:35.441653   29270 provision.go:86] duration metric: configureAuth took 371.263872ms
	I1114 14:01:35.441680   29270 buildroot.go:189] setting minikube options for container-runtime
	I1114 14:01:35.441876   29270 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 14:01:35.441897   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .DriverName
	I1114 14:01:35.442164   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHHostname
	I1114 14:01:35.444501   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:35.444828   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:35.444860   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:35.445028   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHPort
	I1114 14:01:35.445206   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:35.445373   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:35.445523   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHUsername
	I1114 14:01:35.445705   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 14:01:35.446022   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1114 14:01:35.446033   29270 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1114 14:01:35.580199   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1114 14:01:35.580275   29270 buildroot.go:70] root file system type: tmpfs
	I1114 14:01:35.580393   29270 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1114 14:01:35.580423   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHHostname
	I1114 14:01:35.583068   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:35.583427   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:35.583460   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:35.583612   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHPort
	I1114 14:01:35.583808   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:35.583990   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:35.584139   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHUsername
	I1114 14:01:35.584312   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 14:01:35.584631   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1114 14:01:35.584701   29270 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.222"
	Environment="NO_PROXY=192.168.39.222,192.168.39.228"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1114 14:01:35.732917   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.222
	Environment=NO_PROXY=192.168.39.222,192.168.39.228
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1114 14:01:35.732950   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHHostname
	I1114 14:01:35.735388   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:35.735807   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:35.735847   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:35.736012   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHPort
	I1114 14:01:35.736261   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:35.736471   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:35.736618   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHUsername
	I1114 14:01:35.736790   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 14:01:35.737141   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1114 14:01:35.737171   29270 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1114 14:01:36.605581   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1114 14:01:36.605605   29270 machine.go:91] provisioned docker machine in 1.831819366s
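
The unit written above is rendered from a template and only swapped in when the subsequent `diff` shows it changed, so an unchanged config never forces a docker restart. A small sketch of the rendering side with text/template; the template text and field names here are hypothetical stand-ins for the real template in minikube's provisioner:

package main

import (
	"os"
	"text/template"
)

// Hypothetical, trimmed-down stand-in for the unit minikube renders into
// /lib/systemd/system/docker.service.new; the real template has more fields.
const unit = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
{{range .NoProxy}}Environment="NO_PROXY={{.}}"
{{end}}ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock{{range .Args}} {{.}}{{end}}

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker").Parse(unit))
	_ = t.Execute(os.Stdout, struct {
		NoProxy []string
		Args    []string
	}{
		NoProxy: []string{"192.168.39.222", "192.168.39.222,192.168.39.228"},
		Args:    []string{"--label", "provider=kvm2"},
	})
}
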
	I1114 14:01:36.605617   29270 start.go:300] post-start starting for "multinode-661456-m03" (driver="kvm2")
	I1114 14:01:36.605630   29270 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 14:01:36.605647   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .DriverName
	I1114 14:01:36.605986   29270 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 14:01:36.606013   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHHostname
	I1114 14:01:36.608367   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:36.608726   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:36.608758   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:36.608972   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHPort
	I1114 14:01:36.609173   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:36.609340   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHUsername
	I1114 14:01:36.609493   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m03/id_rsa Username:docker}
	I1114 14:01:36.703229   29270 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 14:01:36.707528   29270 command_runner.go:130] > NAME=Buildroot
	I1114 14:01:36.707553   29270 command_runner.go:130] > VERSION=2021.02.12-1-gccdd192-dirty
	I1114 14:01:36.707559   29270 command_runner.go:130] > ID=buildroot
	I1114 14:01:36.707568   29270 command_runner.go:130] > VERSION_ID=2021.02.12
	I1114 14:01:36.707575   29270 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1114 14:01:36.707606   29270 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 14:01:36.707619   29270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-6041/.minikube/addons for local assets ...
	I1114 14:01:36.707680   29270 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-6041/.minikube/files for local assets ...
	I1114 14:01:36.707744   29270 filesync.go:149] local asset: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem -> 132382.pem in /etc/ssl/certs
	I1114 14:01:36.707753   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem -> /etc/ssl/certs/132382.pem
	I1114 14:01:36.707874   29270 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 14:01:36.716516   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem --> /etc/ssl/certs/132382.pem (1708 bytes)
	I1114 14:01:36.741827   29270 start.go:303] post-start completed in 136.193879ms
	I1114 14:01:36.741856   29270 fix.go:56] fixHost completed within 21.366116573s
	I1114 14:01:36.741881   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHHostname
	I1114 14:01:36.744742   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:36.745134   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:36.745152   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:36.745307   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHPort
	I1114 14:01:36.745514   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:36.745672   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:36.745815   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHUsername
	I1114 14:01:36.745969   29270 main.go:141] libmachine: Using SSH client type: native
	I1114 14:01:36.746282   29270 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1114 14:01:36.746293   29270 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1114 14:01:36.882428   29270 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699970496.831158732
	
	I1114 14:01:36.882452   29270 fix.go:206] guest clock: 1699970496.831158732
	I1114 14:01:36.882461   29270 fix.go:219] Guest: 2023-11-14 14:01:36.831158732 +0000 UTC Remote: 2023-11-14 14:01:36.741860759 +0000 UTC m=+124.137400410 (delta=89.297973ms)
	I1114 14:01:36.882477   29270 fix.go:190] guest clock delta is within tolerance: 89.297973ms
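
The guest clock check parses the `date +%s.%N` output and compares it with the host-side timestamp taken around the same moment. A worked version of that arithmetic, reusing the two timestamps from the log (the 2-second tolerance is an assumption for illustration, not necessarily the value fix.go uses):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Guest output of `date +%s.%N`, copied from the log above.
	guest := "1699970496.831158732"
	parts := strings.SplitN(guest, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guestTime := time.Unix(sec, nsec)

	// Host-side ("Remote") timestamp from the same log line.
	hostTime := time.Date(2023, 11, 14, 14, 1, 36, 741860759, time.UTC)

	delta := guestTime.Sub(hostTime) // 89.297973ms, matching the log
	fmt.Printf("delta=%v within=%v\n", delta, math.Abs(delta.Seconds()) < 2)
}
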
	I1114 14:01:36.882491   29270 start.go:83] releasing machines lock for "multinode-661456-m03", held for 21.506759733s
	I1114 14:01:36.882515   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .DriverName
	I1114 14:01:36.882769   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetIP
	I1114 14:01:36.885067   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:36.885529   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:36.885552   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:36.887507   29270 out.go:177] * Found network options:
	I1114 14:01:36.888822   29270 out.go:177]   - NO_PROXY=192.168.39.222,192.168.39.228
	W1114 14:01:36.890109   29270 proxy.go:119] fail to check proxy env: Error ip not in block
	W1114 14:01:36.890133   29270 proxy.go:119] fail to check proxy env: Error ip not in block
	I1114 14:01:36.890147   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .DriverName
	I1114 14:01:36.890671   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .DriverName
	I1114 14:01:36.890835   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .DriverName
	I1114 14:01:36.890888   29270 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 14:01:36.890936   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHHostname
	W1114 14:01:36.891008   29270 proxy.go:119] fail to check proxy env: Error ip not in block
	W1114 14:01:36.891035   29270 proxy.go:119] fail to check proxy env: Error ip not in block
	I1114 14:01:36.891104   29270 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 14:01:36.891124   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHHostname
	I1114 14:01:36.893567   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:36.893898   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:36.893934   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:36.893953   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:36.894080   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHPort
	I1114 14:01:36.894265   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:36.894362   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:36.894392   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:36.894412   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHUsername
	I1114 14:01:36.894583   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m03/id_rsa Username:docker}
	I1114 14:01:36.894656   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHPort
	I1114 14:01:36.894805   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHKeyPath
	I1114 14:01:36.894953   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetSSHUsername
	I1114 14:01:36.895076   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m03/id_rsa Username:docker}
	I1114 14:01:36.995610   29270 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1114 14:01:36.996155   29270 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 14:01:36.996214   29270 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:01:37.018740   29270 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1114 14:01:37.018788   29270 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1114 14:01:37.018807   29270 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
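
Bridge and podman CNI configs are disabled by renaming them with a `.mk_disabled` suffix rather than deleting them. The same effect in Go with Glob and Rename (a sketch of the technique, not minikube's code; needs root to touch /etc/cni/net.d):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Same effect as the find/mv pipeline above: park bridge and podman CNI
	// configs under a .mk_disabled suffix instead of deleting them.
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}
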
	I1114 14:01:37.018818   29270 start.go:472] detecting cgroup driver to use...
	I1114 14:01:37.018910   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 14:01:37.036311   29270 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1114 14:01:37.036405   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1114 14:01:37.045728   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1114 14:01:37.054870   29270 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1114 14:01:37.054922   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1114 14:01:37.064543   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1114 14:01:37.073572   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1114 14:01:37.082458   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1114 14:01:37.091578   29270 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 14:01:37.101366   29270 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1114 14:01:37.110377   29270 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 14:01:37.118168   29270 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1114 14:01:37.118329   29270 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 14:01:37.126399   29270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:01:37.228400   29270 ssh_runner.go:195] Run: sudo systemctl restart containerd
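
The sed runs above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver. One of those edits, redone in Go with a multiline regexp (a sketch; run as root against a real config.toml):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// In-place analogue of one sed above: force the cgroupfs driver by
	// turning SystemdCgroup off in containerd's config.
	path := "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
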
	I1114 14:01:37.247015   29270 start.go:472] detecting cgroup driver to use...
	I1114 14:01:37.247104   29270 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1114 14:01:37.268468   29270 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1114 14:01:37.268490   29270 command_runner.go:130] > [Unit]
	I1114 14:01:37.268501   29270 command_runner.go:130] > Description=Docker Application Container Engine
	I1114 14:01:37.268510   29270 command_runner.go:130] > Documentation=https://docs.docker.com
	I1114 14:01:37.268516   29270 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1114 14:01:37.268521   29270 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1114 14:01:37.268528   29270 command_runner.go:130] > StartLimitBurst=3
	I1114 14:01:37.268532   29270 command_runner.go:130] > StartLimitIntervalSec=60
	I1114 14:01:37.268538   29270 command_runner.go:130] > [Service]
	I1114 14:01:37.268552   29270 command_runner.go:130] > Type=notify
	I1114 14:01:37.268562   29270 command_runner.go:130] > Restart=on-failure
	I1114 14:01:37.268571   29270 command_runner.go:130] > Environment=NO_PROXY=192.168.39.222
	I1114 14:01:37.268582   29270 command_runner.go:130] > Environment=NO_PROXY=192.168.39.222,192.168.39.228
	I1114 14:01:37.268596   29270 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1114 14:01:37.268611   29270 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1114 14:01:37.268623   29270 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1114 14:01:37.268641   29270 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1114 14:01:37.268657   29270 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1114 14:01:37.268663   29270 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1114 14:01:37.268674   29270 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1114 14:01:37.268683   29270 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1114 14:01:37.268689   29270 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1114 14:01:37.268693   29270 command_runner.go:130] > ExecStart=
	I1114 14:01:37.268707   29270 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1114 14:01:37.268714   29270 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1114 14:01:37.268723   29270 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1114 14:01:37.268736   29270 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1114 14:01:37.268749   29270 command_runner.go:130] > LimitNOFILE=infinity
	I1114 14:01:37.268759   29270 command_runner.go:130] > LimitNPROC=infinity
	I1114 14:01:37.268766   29270 command_runner.go:130] > LimitCORE=infinity
	I1114 14:01:37.268774   29270 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1114 14:01:37.268785   29270 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1114 14:01:37.268795   29270 command_runner.go:130] > TasksMax=infinity
	I1114 14:01:37.268801   29270 command_runner.go:130] > TimeoutStartSec=0
	I1114 14:01:37.268814   29270 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1114 14:01:37.268824   29270 command_runner.go:130] > Delegate=yes
	I1114 14:01:37.268847   29270 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1114 14:01:37.268861   29270 command_runner.go:130] > KillMode=process
	I1114 14:01:37.268867   29270 command_runner.go:130] > [Install]
	I1114 14:01:37.268874   29270 command_runner.go:130] > WantedBy=multi-user.target
	I1114 14:01:37.268944   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 14:01:37.281735   29270 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 14:01:37.310542   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 14:01:37.323514   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1114 14:01:37.335506   29270 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1114 14:01:37.366295   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1114 14:01:37.379338   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 14:01:37.396937   29270 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1114 14:01:37.397466   29270 ssh_runner.go:195] Run: which cri-dockerd
	I1114 14:01:37.401179   29270 command_runner.go:130] > /usr/bin/cri-dockerd
	I1114 14:01:37.401301   29270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1114 14:01:37.412504   29270 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1114 14:01:37.428669   29270 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1114 14:01:37.540024   29270 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1114 14:01:37.654253   29270 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1114 14:01:37.655776   29270 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1114 14:01:37.673157   29270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:01:37.786656   29270 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1114 14:01:39.276878   29270 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.490187776s)
	I1114 14:01:39.276952   29270 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1114 14:01:39.384815   29270 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1114 14:01:39.507191   29270 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1114 14:01:39.628812   29270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:01:39.748965   29270 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1114 14:01:39.767478   29270 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:01:39.883947   29270 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1114 14:01:39.965153   29270 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1114 14:01:39.965218   29270 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1114 14:01:39.971209   29270 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1114 14:01:39.971231   29270 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1114 14:01:39.971238   29270 command_runner.go:130] > Device: 16h/22d	Inode: 827         Links: 1
	I1114 14:01:39.971244   29270 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1114 14:01:39.971251   29270 command_runner.go:130] > Access: 2023-11-14 14:01:39.847077004 +0000
	I1114 14:01:39.971256   29270 command_runner.go:130] > Modify: 2023-11-14 14:01:39.847077004 +0000
	I1114 14:01:39.971261   29270 command_runner.go:130] > Change: 2023-11-14 14:01:39.849077004 +0000
	I1114 14:01:39.971267   29270 command_runner.go:130] >  Birth: -
	I1114 14:01:39.971653   29270 start.go:540] Will wait 60s for crictl version
	I1114 14:01:39.971709   29270 ssh_runner.go:195] Run: which crictl
	I1114 14:01:39.975636   29270 command_runner.go:130] > /usr/bin/crictl
	I1114 14:01:39.975812   29270 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 14:01:40.029868   29270 command_runner.go:130] > Version:  0.1.0
	I1114 14:01:40.029897   29270 command_runner.go:130] > RuntimeName:  docker
	I1114 14:01:40.029904   29270 command_runner.go:130] > RuntimeVersion:  24.0.7
	I1114 14:01:40.030194   29270 command_runner.go:130] > RuntimeApiVersion:  v1
	I1114 14:01:40.031656   29270 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1114 14:01:40.031706   29270 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1114 14:01:40.058730   29270 command_runner.go:130] > 24.0.7
	I1114 14:01:40.058845   29270 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1114 14:01:40.084059   29270 command_runner.go:130] > 24.0.7
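
The runtime probe shells out to `docker version` with a Go template so only the bare server version string comes back. The same probe as a standalone snippet (assumes a reachable docker daemon):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the log runs: ask the daemon for its server version via a
	// Go template so only "24.0.7"-style output comes back.
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		fmt.Println("docker not reachable:", err)
		return
	}
	fmt.Println("server version:", strings.TrimSpace(string(out)))
}
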
	I1114 14:01:40.087491   29270 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
	I1114 14:01:40.088902   29270 out.go:177]   - env NO_PROXY=192.168.39.222
	I1114 14:01:40.090406   29270 out.go:177]   - env NO_PROXY=192.168.39.222,192.168.39.228
	I1114 14:01:40.091790   29270 main.go:141] libmachine: (multinode-661456-m03) Calling .GetIP
	I1114 14:01:40.094613   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:40.094954   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:c5:f5", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 15:01:27 +0000 UTC Type:0 Mac:52:54:00:07:c5:f5 Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:multinode-661456-m03 Clientid:01:52:54:00:07:c5:f5}
	I1114 14:01:40.094995   29270 main.go:141] libmachine: (multinode-661456-m03) DBG | domain multinode-661456-m03 has defined IP address 192.168.39.82 and MAC address 52:54:00:07:c5:f5 in network mk-multinode-661456
	I1114 14:01:40.095174   29270 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 14:01:40.099473   29270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 14:01:40.112733   29270 certs.go:56] Setting up /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456 for IP: 192.168.39.82
	I1114 14:01:40.112761   29270 certs.go:190] acquiring lock for shared ca certs: {Name:mkb3fe4539ce9ed96ff0e979200082f9548591da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:01:40.112922   29270 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.key
	I1114 14:01:40.112987   29270 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.key
	I1114 14:01:40.113005   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1114 14:01:40.113022   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1114 14:01:40.113039   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1114 14:01:40.113056   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1114 14:01:40.113121   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/13238.pem (1338 bytes)
	W1114 14:01:40.113166   29270 certs.go:433] ignoring /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/13238_empty.pem, impossibly tiny 0 bytes
	I1114 14:01:40.113182   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca-key.pem (1679 bytes)
	I1114 14:01:40.113219   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem (1082 bytes)
	I1114 14:01:40.113252   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem (1123 bytes)
	I1114 14:01:40.113286   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem (1675 bytes)
	I1114 14:01:40.113344   29270 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem (1708 bytes)
	I1114 14:01:40.113385   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem -> /usr/share/ca-certificates/132382.pem
	I1114 14:01:40.113406   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:01:40.113425   29270 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/13238.pem -> /usr/share/ca-certificates/13238.pem
	I1114 14:01:40.113781   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 14:01:40.139096   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1114 14:01:40.163278   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 14:01:40.187850   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 14:01:40.212406   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem --> /usr/share/ca-certificates/132382.pem (1708 bytes)
	I1114 14:01:40.237878   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 14:01:40.262800   29270 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/certs/13238.pem --> /usr/share/ca-certificates/13238.pem (1338 bytes)
	I1114 14:01:40.286850   29270 ssh_runner.go:195] Run: openssl version
	I1114 14:01:40.292450   29270 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1114 14:01:40.292663   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132382.pem && ln -fs /usr/share/ca-certificates/132382.pem /etc/ssl/certs/132382.pem"
	I1114 14:01:40.303599   29270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132382.pem
	I1114 14:01:40.310003   29270 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 14 13:40 /usr/share/ca-certificates/132382.pem
	I1114 14:01:40.310037   29270 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 13:40 /usr/share/ca-certificates/132382.pem
	I1114 14:01:40.310086   29270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132382.pem
	I1114 14:01:40.315781   29270 command_runner.go:130] > 3ec20f2e
	I1114 14:01:40.315955   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132382.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 14:01:40.326295   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 14:01:40.336892   29270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:01:40.342066   29270 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 14 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:01:40.342331   29270 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:01:40.342392   29270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:01:40.348248   29270 command_runner.go:130] > b5213941
	I1114 14:01:40.348320   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 14:01:40.359937   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13238.pem && ln -fs /usr/share/ca-certificates/13238.pem /etc/ssl/certs/13238.pem"
	I1114 14:01:40.371019   29270 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13238.pem
	I1114 14:01:40.376159   29270 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 14 13:40 /usr/share/ca-certificates/13238.pem
	I1114 14:01:40.376224   29270 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 13:40 /usr/share/ca-certificates/13238.pem
	I1114 14:01:40.376279   29270 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13238.pem
	I1114 14:01:40.382140   29270 command_runner.go:130] > 51391683
	I1114 14:01:40.382209   29270 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13238.pem /etc/ssl/certs/51391683.0"
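
Each CA bundle is linked into /etc/ssl/certs under its OpenSSL subject-name hash so TLS stacks can locate it. A sketch of that hash-and-symlink step, shelling out to openssl exactly as the log does (needs root for /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Compute OpenSSL's subject-name hash for the CA bundle and link
	// <hash>.0 to it, mirroring the test -L / ln -fs sequence above.
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	hash := strings.TrimSpace(string(out)) // b5213941 for this bundle, per the log
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // ln -fs semantics: replace any stale link first
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
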
	I1114 14:01:40.393361   29270 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 14:01:40.398094   29270 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 14:01:40.398133   29270 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 14:01:40.398211   29270 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1114 14:01:40.423668   29270 command_runner.go:130] > cgroupfs
	I1114 14:01:40.423797   29270 cni.go:84] Creating CNI manager for ""
	I1114 14:01:40.423812   29270 cni.go:136] 3 nodes found, recommending kindnet
	I1114 14:01:40.423823   29270 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 14:01:40.423850   29270 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.82 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-661456 NodeName:multinode-661456-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 14:01:40.423983   29270 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-661456-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.82
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 14:01:40.424067   29270 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-661456-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-661456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
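
The kubeadm config generated above is a single file holding four YAML documents separated by `---`. A tiny sketch that splits such a file and reports each document's kind (the embedded config is abbreviated; a real parser would use a YAML library instead of string splitting and a regexp):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// Abbreviated copy of the multi-document config above.
	config := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`

	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	for i, doc := range strings.Split(config, "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("doc %d: %s\n", i+1, m[1])
		}
	}
}
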
	I1114 14:01:40.424126   29270 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 14:01:40.434638   29270 command_runner.go:130] > kubeadm
	I1114 14:01:40.434666   29270 command_runner.go:130] > kubectl
	I1114 14:01:40.434674   29270 command_runner.go:130] > kubelet
	I1114 14:01:40.434697   29270 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 14:01:40.434761   29270 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1114 14:01:40.444201   29270 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1114 14:01:40.460791   29270 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 14:01:40.477124   29270 ssh_runner.go:195] Run: grep 192.168.39.222	control-plane.minikube.internal$ /etc/hosts
	I1114 14:01:40.480933   29270 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 14:01:40.493011   29270 host.go:66] Checking if "multinode-661456" exists ...
	I1114 14:01:40.493316   29270 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 14:01:40.493412   29270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:01:40.493472   29270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:01:40.509618   29270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39151
	I1114 14:01:40.510088   29270 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:01:40.510610   29270 main.go:141] libmachine: Using API Version  1
	I1114 14:01:40.510632   29270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:01:40.510910   29270 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:01:40.511157   29270 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 14:01:40.511322   29270 start.go:304] JoinCluster: &{Name:multinode-661456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-661456 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.222 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.228 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:01:40.511493   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1114 14:01:40.511514   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 14:01:40.514318   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:01:40.514765   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 14:01:40.514798   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:01:40.514894   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 14:01:40.515064   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 14:01:40.515207   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 14:01:40.515407   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 14:01:40.693011   29270 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token boiu83.v1msfviqagofty9o --discovery-token-ca-cert-hash sha256:3d0753b1fc9b8f69eb568b4419207eb7ec90f54c0fdfe0ded5ab09203d39de66 
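
The printed join command carries everything a new node needs: the control-plane endpoint, a bootstrap token, and the CA cert hash used to pin discovery. A small sketch that pulls those fields out of the exact string from the log:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The join command printed above, verbatim.
	join := "kubeadm join control-plane.minikube.internal:8443 --token boiu83.v1msfviqagofty9o --discovery-token-ca-cert-hash sha256:3d0753b1fc9b8f69eb568b4419207eb7ec90f54c0fdfe0ded5ab09203d39de66"
	fields := strings.Fields(join)
	vals := map[string]string{}
	for i := 0; i < len(fields)-1; i++ {
		if strings.HasPrefix(fields[i], "--") {
			vals[fields[i]] = fields[i+1]
		}
	}
	fmt.Println("endpoint:", fields[2])
	fmt.Println("token:   ", vals["--token"])
	fmt.Println("ca hash: ", vals["--discovery-token-ca-cert-hash"])
}
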
	I1114 14:01:40.693082   29270 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1114 14:01:40.693126   29270 host.go:66] Checking if "multinode-661456" exists ...
	I1114 14:01:40.693600   29270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:01:40.693656   29270 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:01:40.708083   29270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I1114 14:01:40.708541   29270 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:01:40.709112   29270 main.go:141] libmachine: Using API Version  1
	I1114 14:01:40.709139   29270 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:01:40.709496   29270 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:01:40.709716   29270 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 14:01:40.709920   29270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-661456-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1114 14:01:40.709948   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 14:01:40.712719   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:01:40.713138   29270 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:57:24 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 14:01:40.713173   29270 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 14:01:40.713380   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 14:01:40.713554   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 14:01:40.713714   29270 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 14:01:40.713854   29270 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 14:01:40.941545   29270 command_runner.go:130] > node/multinode-661456-m03 cordoned
	I1114 14:01:43.984914   29270 command_runner.go:130] > pod "busybox-5bc68d56bd-lqtzt" has DeletionTimestamp older than 1 seconds, skipping
	I1114 14:01:43.984942   29270 command_runner.go:130] > node/multinode-661456-m03 drained
	I1114 14:01:43.986819   29270 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1114 14:01:43.986844   29270 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-9nvmm, kube-system/kube-proxy-r9r5l
	I1114 14:01:43.986900   29270 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl drain multinode-661456-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.276941252s)
	I1114 14:01:43.986926   29270 node.go:108] successfully drained node "m03"
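(Note the deprecation warning above: `--delete-local-data` has been superseded by `--delete-emptydir-data`, which the command already passes, so the old flag is redundant. A sketch of the equivalent drain without the deprecated flag, assuming kubectl is on PATH and the kubeconfig is already configured:)

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same drain the log performs, minus the deprecated --delete-local-data.
        cmd := exec.Command("kubectl", "drain", "multinode-661456-m03",
            "--force",
            "--grace-period=1",
            "--skip-wait-for-delete-timeout=1",
            "--disable-eviction",
            "--ignore-daemonsets",
            "--delete-emptydir-data")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("drain failed: %v\n%s", err, out)
        }
    }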
	I1114 14:01:43.987277   29270 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 14:01:43.987483   29270 kapi.go:59] client config for multinode-661456: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.key", CAFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c236c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 14:01:43.987715   29270 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1114 14:01:43.987759   29270 round_trippers.go:463] DELETE https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:43.987773   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:43.987780   29270 round_trippers.go:473]     Content-Type: application/json
	I1114 14:01:43.987785   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:43.987792   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:43.997496   29270 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1114 14:01:43.997527   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:43.997538   29270 round_trippers.go:580]     Audit-Id: 0a543f00-9b94-4342-a4ef-eaa52a85f5c2
	I1114 14:01:43.997544   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:43.997549   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:43.997554   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:43.997559   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:43.997564   29270 round_trippers.go:580]     Content-Length: 171
	I1114 14:01:43.997569   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:43 GMT
	I1114 14:01:43.997589   29270 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-661456-m03","kind":"nodes","uid":"de2fa03b-c47f-4331-a423-2475d21c15ba"}}
	I1114 14:01:43.997617   29270 node.go:124] successfully deleted node "m03"
	I1114 14:01:43.997626   29270 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
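(The removal above is a plain DELETE on the Node object, as the request/response pair shows. The same call via client-go would look roughly like this; the kubeconfig path is an assumption:)

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of DELETE /api/v1/nodes/multinode-661456-m03 above.
        if err := cs.CoreV1().Nodes().Delete(context.Background(),
            "multinode-661456-m03", metav1.DeleteOptions{}); err != nil {
            log.Fatal(err)
        }
    }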
	I1114 14:01:43.997645   29270 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1114 14:01:43.997662   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token boiu83.v1msfviqagofty9o --discovery-token-ca-cert-hash sha256:3d0753b1fc9b8f69eb568b4419207eb7ec90f54c0fdfe0ded5ab09203d39de66 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-661456-m03"
	I1114 14:01:44.115880   29270 command_runner.go:130] ! W1114 14:01:44.063980    1158 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1114 14:01:44.399201   29270 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 14:01:46.122674   29270 command_runner.go:130] > [preflight] Running pre-flight checks
	I1114 14:01:46.122702   29270 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1114 14:01:46.122713   29270 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1114 14:01:46.122730   29270 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 14:01:46.122743   29270 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 14:01:46.122752   29270 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1114 14:01:46.122763   29270 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1114 14:01:46.122781   29270 command_runner.go:130] > This node has joined the cluster:
	I1114 14:01:46.122797   29270 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1114 14:01:46.122811   29270 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1114 14:01:46.122823   29270 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1114 14:01:46.122848   29270 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token boiu83.v1msfviqagofty9o --discovery-token-ca-cert-hash sha256:3d0753b1fc9b8f69eb568b4419207eb7ec90f54c0fdfe0ded5ab09203d39de66 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-661456-m03": (2.12516951s)
	I1114 14:01:46.122872   29270 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1114 14:01:46.366378   29270 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1114 14:01:46.366426   29270 start.go:306] JoinCluster complete in 5.855104376s
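(The rejoin sequence just completed is: run the captured `kubeadm join` on the worker, then enable kubelet so it starts on boot. A condensed sketch follows; the token and hash are the ones printed earlier and would normally be passed in rather than hard-coded, and `systemctl enable --now` folds the log's separate enable and start steps into one:)

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        steps := [][]string{
            {"sudo", "kubeadm", "join", "control-plane.minikube.internal:8443",
                "--token", "boiu83.v1msfviqagofty9o",
                "--discovery-token-ca-cert-hash", "sha256:3d0753b1fc9b8f69eb568b4419207eb7ec90f54c0fdfe0ded5ab09203d39de66",
                "--ignore-preflight-errors=all",
                "--cri-socket", "/var/run/cri-dockerd.sock",
                "--node-name=multinode-661456-m03"},
            {"sudo", "systemctl", "daemon-reload"},
            {"sudo", "systemctl", "enable", "--now", "kubelet"},
        }
        for _, s := range steps {
            if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
                log.Fatalf("%v failed: %v\n%s", s, err, out)
            }
        }
    }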
	I1114 14:01:46.366439   29270 cni.go:84] Creating CNI manager for ""
	I1114 14:01:46.366445   29270 cni.go:136] 3 nodes found, recommending kindnet
	I1114 14:01:46.366507   29270 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1114 14:01:46.372493   29270 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1114 14:01:46.372522   29270 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1114 14:01:46.372536   29270 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1114 14:01:46.372546   29270 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 14:01:46.372556   29270 command_runner.go:130] > Access: 2023-11-14 13:59:45.502880393 +0000
	I1114 14:01:46.372565   29270 command_runner.go:130] > Modify: 2023-11-11 02:04:07.000000000 +0000
	I1114 14:01:46.372573   29270 command_runner.go:130] > Change: 2023-11-14 13:59:43.657880393 +0000
	I1114 14:01:46.372584   29270 command_runner.go:130] >  Birth: -
	I1114 14:01:46.372629   29270 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1114 14:01:46.372646   29270 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1114 14:01:46.391455   29270 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1114 14:01:46.680143   29270 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1114 14:01:46.684350   29270 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1114 14:01:46.688452   29270 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1114 14:01:46.700415   29270 command_runner.go:130] > daemonset.apps/kindnet configured
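(With three nodes found, the kindnet CNI manifest is (re)applied; `kubectl apply` is idempotent, which is why every object above reports `unchanged` or `configured`. A sketch of the same write-then-apply step; the manifest content is elided here, and a temp file stands in for the log's /var/tmp/minikube/cni.yaml:)

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // cniManifest stands in for the kindnet manifest minikube renders;
    // the real content is not reproduced here.
    const cniManifest = "# kindnet DaemonSet + RBAC would go here\n"

    func main() {
        f, err := os.CreateTemp("", "cni-*.yaml")
        if err != nil {
            log.Fatal(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(cniManifest); err != nil {
            log.Fatal(err)
        }
        f.Close()
        // Mirrors `kubectl apply -f /var/tmp/minikube/cni.yaml` above.
        if out, err := exec.Command("kubectl", "apply", "-f", f.Name()).CombinedOutput(); err != nil {
            log.Fatalf("apply failed: %v\n%s", err, out)
        }
    }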
	I1114 14:01:46.703766   29270 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 14:01:46.703968   29270 kapi.go:59] client config for multinode-661456: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.key", CAFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c236c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 14:01:46.704239   29270 round_trippers.go:463] GET https://192.168.39.222:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1114 14:01:46.704252   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:46.704259   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:46.704265   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:46.707121   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:46.707141   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:46.707148   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:46.707158   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:46.707163   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:46.707168   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:46.707174   29270 round_trippers.go:580]     Content-Length: 291
	I1114 14:01:46.707179   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:46 GMT
	I1114 14:01:46.707185   29270 round_trippers.go:580]     Audit-Id: 6eb4f058-3067-4752-bc7e-2461aa48189d
	I1114 14:01:46.707203   29270 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9d8407fd-076f-444c-a235-0048e6022d7e","resourceVersion":"906","creationTimestamp":"2023-11-14T13:53:32Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1114 14:01:46.707279   29270 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-661456" context rescaled to 1 replicas
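(The `coredns` rescale goes through the deployment's scale subresource, which the GET above reads; here the GET already returned `"replicas":1`, so no update was needed. A client-go sketch of the full read-modify-update cycle; the kubeconfig path is an assumption:)

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx := context.Background()
        // GET then PUT on .../deployments/coredns/scale, as in the log.
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            log.Fatal(err)
        }
    }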
	I1114 14:01:46.707309   29270 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1114 14:01:46.710093   29270 out.go:177] * Verifying Kubernetes components...
	I1114 14:01:46.711514   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:01:46.725776   29270 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 14:01:46.726041   29270 kapi.go:59] client config for multinode-661456: &rest.Config{Host:"https://192.168.39.222:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/profiles/multinode-661456/client.key", CAFile:"/home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c236c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 14:01:46.726369   29270 node_ready.go:35] waiting up to 6m0s for node "multinode-661456-m03" to be "Ready" ...
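(What follows is a readiness poll: GET the Node roughly every 500ms for up to 6 minutes and check its Ready condition, which stays "False" below until kubelet and the CNI settle. A sketch of that loop; names and interval mirror the log, and the kubeconfig path is an assumption:)

    package main

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-661456-m03", metav1.GetOptions{})
            if err != nil {
                log.Fatal(err)
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    log.Println("node is Ready")
                    return
                }
            }
            select {
            case <-ctx.Done():
                log.Fatal("timed out waiting for node to become Ready")
            case <-time.After(500 * time.Millisecond):
            }
        }
    }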
	I1114 14:01:46.726454   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:46.726465   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:46.726475   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:46.726485   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:46.729109   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:46.729126   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:46.729135   29270 round_trippers.go:580]     Content-Length: 3861
	I1114 14:01:46.729142   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:46 GMT
	I1114 14:01:46.729149   29270 round_trippers.go:580]     Audit-Id: 1b955468-8600-4999-9e17-c70e45e70d51
	I1114 14:01:46.729162   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:46.729170   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:46.729178   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:46.729185   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:46.729377   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1092","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 2837 chars]
	I1114 14:01:46.729731   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:46.729749   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:46.729762   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:46.729772   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:46.732234   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:46.732251   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:46.732258   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:46.732263   29270 round_trippers.go:580]     Content-Length: 3861
	I1114 14:01:46.732268   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:46 GMT
	I1114 14:01:46.732274   29270 round_trippers.go:580]     Audit-Id: 0197a4b2-0979-4675-a73a-5b76c2f3c2c7
	I1114 14:01:46.732279   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:46.732284   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:46.732288   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:46.732398   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1092","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 2837 chars]
	I1114 14:01:47.233494   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:47.233527   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:47.233545   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:47.233554   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:47.236279   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:47.236302   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:47.236313   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:47.236322   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:47.236331   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:47.236343   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:47 GMT
	I1114 14:01:47.236352   29270 round_trippers.go:580]     Audit-Id: 7b2b1de6-4304-4665-a07c-3d3ce116bb71
	I1114 14:01:47.236358   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:47.236366   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:47.236464   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:47.733512   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:47.733537   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:47.733548   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:47.733558   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:47.737321   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:47.737352   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:47.737362   29270 round_trippers.go:580]     Audit-Id: 4609725d-1b78-4e28-a663-bd1542189aad
	I1114 14:01:47.737371   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:47.737379   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:47.737386   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:47.737394   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:47.737403   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:47.737411   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:47 GMT
	I1114 14:01:47.737533   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:48.233139   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:48.233163   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:48.233173   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:48.233182   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:48.235781   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:48.235808   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:48.235818   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:48.235827   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:48.235835   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:48 GMT
	I1114 14:01:48.235843   29270 round_trippers.go:580]     Audit-Id: 3f9efb2b-f39c-46d7-942d-cc7c330a3b4e
	I1114 14:01:48.235851   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:48.235859   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:48.235869   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:48.235939   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:48.733564   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:48.733585   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:48.733593   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:48.733600   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:48.737529   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:48.737552   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:48.737560   29270 round_trippers.go:580]     Audit-Id: d69e174d-18c6-4413-9f7e-2c73a034b605
	I1114 14:01:48.737565   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:48.737570   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:48.737575   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:48.737591   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:48.737597   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:48.737602   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:48 GMT
	I1114 14:01:48.737679   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:48.737906   29270 node_ready.go:58] node "multinode-661456-m03" has status "Ready":"False"
	I1114 14:01:49.233828   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:49.233851   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:49.233859   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:49.233865   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:49.236641   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:49.236660   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:49.236666   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:49 GMT
	I1114 14:01:49.236672   29270 round_trippers.go:580]     Audit-Id: 77eb35bf-6875-40eb-8963-fc6ba9e926fa
	I1114 14:01:49.236677   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:49.236682   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:49.236687   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:49.236692   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:49.236701   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:49.236760   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:49.733323   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:49.733350   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:49.733359   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:49.733364   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:49.736899   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:49.736925   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:49.736936   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:49.736944   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:49.736950   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:49.736955   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:49.736960   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:49 GMT
	I1114 14:01:49.736965   29270 round_trippers.go:580]     Audit-Id: e4d5a15f-5dcd-405f-956a-c401ec5e8b4c
	I1114 14:01:49.736970   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:49.737168   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:50.233483   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:50.233513   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:50.233522   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:50.233531   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:50.237550   29270 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 14:01:50.237576   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:50.237585   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:50.237594   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:50.237600   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:50.237607   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:50 GMT
	I1114 14:01:50.237614   29270 round_trippers.go:580]     Audit-Id: 10c00471-35ca-4c35-b5c5-a1e9a796a488
	I1114 14:01:50.237622   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:50.237630   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:50.237730   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:50.733379   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:50.733406   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:50.733416   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:50.733425   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:50.737491   29270 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 14:01:50.737512   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:50.737519   29270 round_trippers.go:580]     Audit-Id: e2f8ecff-bde8-4054-8dc1-72d6dddf9da8
	I1114 14:01:50.737525   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:50.737530   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:50.737538   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:50.737546   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:50.737560   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:50.737571   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:50 GMT
	I1114 14:01:50.737646   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:51.232920   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:51.232945   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:51.232953   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:51.232959   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:51.236420   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:51.236446   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:51.236457   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:51 GMT
	I1114 14:01:51.236472   29270 round_trippers.go:580]     Audit-Id: 11bb1992-c9fd-4343-932a-da1d77220b36
	I1114 14:01:51.236480   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:51.236487   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:51.236493   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:51.236498   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:51.236506   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:51.236577   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:51.236901   29270 node_ready.go:58] node "multinode-661456-m03" has status "Ready":"False"
	I1114 14:01:51.733053   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:51.733077   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:51.733085   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:51.733091   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:51.738681   29270 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1114 14:01:51.738709   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:51.738719   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:51 GMT
	I1114 14:01:51.738745   29270 round_trippers.go:580]     Audit-Id: 86bab566-ea49-476b-be24-1fd9ae9245d2
	I1114 14:01:51.738756   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:51.738765   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:51.738774   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:51.738789   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:51.738797   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:51.738895   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:52.233345   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:52.233371   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:52.233380   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:52.233385   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:52.237275   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:52.237298   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:52.237305   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:52.237310   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:52.237315   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:52.237320   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:52.237335   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:52 GMT
	I1114 14:01:52.237340   29270 round_trippers.go:580]     Audit-Id: f1ee262f-fe81-499a-853c-68ea76312db2
	I1114 14:01:52.237345   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:52.237407   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:52.733551   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:52.733576   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:52.733586   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:52.733595   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:52.736816   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:52.736855   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:52.736867   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:52.736874   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:52.736879   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:52.736885   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:52.736891   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:52.736896   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:52 GMT
	I1114 14:01:52.736902   29270 round_trippers.go:580]     Audit-Id: 8ebe4d00-c0bc-4289-855e-f67154745cc4
	I1114 14:01:52.736999   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:53.232886   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:53.232913   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:53.232923   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:53.232931   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:53.235703   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:53.235723   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:53.235730   29270 round_trippers.go:580]     Audit-Id: 006e3fde-5fa9-4ae8-a00f-d0f696859752
	I1114 14:01:53.235736   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:53.235741   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:53.235746   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:53.235751   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:53.235756   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:53.235763   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:53 GMT
	I1114 14:01:53.235821   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:53.733027   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:53.733054   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:53.733063   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:53.733069   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:53.736736   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:53.736759   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:53.736766   29270 round_trippers.go:580]     Audit-Id: c65db9e5-fce0-46b6-a0b5-10418ebe425b
	I1114 14:01:53.736771   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:53.736780   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:53.736785   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:53.736790   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:53.736795   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:53.736800   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:53 GMT
	I1114 14:01:53.736882   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:53.737176   29270 node_ready.go:58] node "multinode-661456-m03" has status "Ready":"False"
	I1114 14:01:54.233059   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:54.233095   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:54.233108   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:54.233114   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:54.236196   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:54.236220   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:54.236231   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:54 GMT
	I1114 14:01:54.236241   29270 round_trippers.go:580]     Audit-Id: c1125b05-32de-4ccb-920c-6ad454c3c31f
	I1114 14:01:54.236250   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:54.236258   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:54.236266   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:54.236272   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:54.236280   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:54.236355   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:54.732908   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:54.732934   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:54.732943   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:54.732949   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:54.735788   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:54.735841   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:54.735852   29270 round_trippers.go:580]     Audit-Id: 9f1bb6b7-e64c-471c-9ca5-d575f9ed99b2
	I1114 14:01:54.735860   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:54.735873   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:54.735883   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:54.735890   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:54.735901   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:54.735913   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:54 GMT
	I1114 14:01:54.736002   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:55.233589   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:55.233615   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:55.233628   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:55.233658   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:55.239023   29270 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1114 14:01:55.239048   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:55.239055   29270 round_trippers.go:580]     Content-Length: 3970
	I1114 14:01:55.239060   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:55 GMT
	I1114 14:01:55.239069   29270 round_trippers.go:580]     Audit-Id: e4edf7d6-6cd0-4c2f-b0df-3630abe3b9bd
	I1114 14:01:55.239074   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:55.239079   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:55.239084   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:55.239089   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:55.239267   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1094","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 2946 chars]
	I1114 14:01:55.733301   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:55.733328   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:55.733342   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:55.733351   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:55.736233   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:55.736258   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:55.736267   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:55.736275   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:55.736283   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:55 GMT
	I1114 14:01:55.736291   29270 round_trippers.go:580]     Audit-Id: 2167df78-7220-442a-840c-68a4a8094f9a
	I1114 14:01:55.736298   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:55.736307   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:55.736449   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:01:56.233024   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:56.233049   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:56.233057   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:56.233063   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:56.236451   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:56.236472   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:56.236479   29270 round_trippers.go:580]     Audit-Id: c8d81f27-3911-4b1a-a238-086510b4f506
	I1114 14:01:56.236484   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:56.236489   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:56.236494   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:56.236499   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:56.236504   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:56 GMT
	I1114 14:01:56.237081   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:01:56.237321   29270 node_ready.go:58] node "multinode-661456-m03" has status "Ready":"False"
	I1114 14:01:56.733786   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:56.733809   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:56.733817   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:56.733823   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:56.737121   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:56.737148   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:56.737158   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:56.737166   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:56.737173   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:56.737180   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:56 GMT
	I1114 14:01:56.737187   29270 round_trippers.go:580]     Audit-Id: 33c779ee-65f8-41ad-b48b-54e3b1d7cdbc
	I1114 14:01:56.737195   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:56.737683   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:01:57.233335   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:57.233365   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:57.233378   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:57.233388   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:57.235951   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:57.235977   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:57.235988   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:57.235995   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:57.236000   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:57 GMT
	I1114 14:01:57.236005   29270 round_trippers.go:580]     Audit-Id: 102ae7a9-b9ad-43ae-ac59-a6d507f79021
	I1114 14:01:57.236010   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:57.236015   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:57.236160   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:01:57.733276   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:57.733298   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:57.733306   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:57.733312   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:57.739621   29270 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1114 14:01:57.739652   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:57.739660   29270 round_trippers.go:580]     Audit-Id: a2246dde-5b15-43da-97ce-b69f0f4fcbf3
	I1114 14:01:57.739671   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:57.739680   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:57.739689   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:57.739698   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:57.739707   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:57 GMT
	I1114 14:01:57.739806   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:01:58.233827   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:58.233853   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:58.233862   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:58.233867   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:58.236542   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:58.236562   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:58.236570   29270 round_trippers.go:580]     Audit-Id: 9c4fab17-191a-44d2-9562-0bb0e985af34
	I1114 14:01:58.236576   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:58.236585   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:58.236593   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:58.236602   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:58.236610   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:58 GMT
	I1114 14:01:58.236877   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:01:58.733770   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:58.733795   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:58.733803   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:58.733809   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:58.736866   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:58.736883   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:58.736890   29270 round_trippers.go:580]     Audit-Id: c4045015-a1e5-4752-9e34-4436921c8562
	I1114 14:01:58.736896   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:58.736901   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:58.736907   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:58.736914   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:58.736923   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:58 GMT
	I1114 14:01:58.737536   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:01:58.737795   29270 node_ready.go:58] node "multinode-661456-m03" has status "Ready":"False"
	I1114 14:01:59.233563   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:59.233583   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:59.233590   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:59.233596   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:59.236206   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:01:59.236228   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:59.236240   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:59.236247   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:59.236254   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:59.236262   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:59.236278   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:59 GMT
	I1114 14:01:59.236286   29270 round_trippers.go:580]     Audit-Id: 9460ad3a-f54d-474f-8cd0-3d698af06fb9
	I1114 14:01:59.236464   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:01:59.733091   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:01:59.733116   29270 round_trippers.go:469] Request Headers:
	I1114 14:01:59.733130   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:01:59.733136   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:01:59.736423   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:01:59.736455   29270 round_trippers.go:577] Response Headers:
	I1114 14:01:59.736465   29270 round_trippers.go:580]     Audit-Id: e42b2f42-7c60-48bf-868e-2c0ed80bc84d
	I1114 14:01:59.736475   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:01:59.736483   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:01:59.736494   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:01:59.736503   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:01:59.736512   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:01:59 GMT
	I1114 14:01:59.736848   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:02:00.233624   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:02:00.233658   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:00.233671   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:00.233680   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:00.236543   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:00.236571   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:00.236582   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:00.236590   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:00.236599   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:00.236607   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:00 GMT
	I1114 14:02:00.236615   29270 round_trippers.go:580]     Audit-Id: a15f7bb0-1758-4ada-9349-f0a941e40569
	I1114 14:02:00.236631   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:00.236795   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:02:00.733833   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:02:00.733858   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:00.733865   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:00.733871   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:00.736852   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:00.736873   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:00.736881   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:00.736887   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:00 GMT
	I1114 14:02:00.736892   29270 round_trippers.go:580]     Audit-Id: c18ca877-4239-4d59-872c-5a5a55bb8d0a
	I1114 14:02:00.736897   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:00.736902   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:00.736907   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:00.737177   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:02:01.232847   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:02:01.232874   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:01.232883   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:01.232889   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:01.235759   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:01.235786   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:01.235795   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:01.235801   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:01.235814   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:01.235822   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:01 GMT
	I1114 14:02:01.235835   29270 round_trippers.go:580]     Audit-Id: e9011afb-8681-4b29-822f-83ef65b75487
	I1114 14:02:01.235847   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:01.236107   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:02:01.236343   29270 node_ready.go:58] node "multinode-661456-m03" has status "Ready":"False"
	I1114 14:02:01.733856   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:02:01.733887   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:01.733900   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:01.733909   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:01.737064   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:02:01.737084   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:01.737090   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:01.737095   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:01.737100   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:01 GMT
	I1114 14:02:01.737105   29270 round_trippers.go:580]     Audit-Id: 1bb462b0-ec7d-475a-8fb9-32e083c35aa3
	I1114 14:02:01.737110   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:01.737115   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:01.737607   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:02:02.232897   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:02:02.232923   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:02.232932   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:02.232937   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:02.235842   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:02.235865   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:02.235872   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:02.235882   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:02 GMT
	I1114 14:02:02.235889   29270 round_trippers.go:580]     Audit-Id: 96c66a79-fa47-402b-8744-fb20eb22229a
	I1114 14:02:02.235897   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:02.235904   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:02.235911   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:02.236006   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:02:02.732926   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:02:02.732953   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:02.732961   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:02.732968   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:02.736084   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:02:02.736115   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:02.736126   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:02.736135   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:02 GMT
	I1114 14:02:02.736143   29270 round_trippers.go:580]     Audit-Id: 8c42a607-0e2d-475d-a9a6-880ed228196d
	I1114 14:02:02.736151   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:02.736160   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:02.736167   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:02.736262   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:02:03.233325   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:02:03.233359   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:03.233371   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:03.233380   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:03.236325   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:03.236346   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:03.236353   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:03.236358   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:03.236363   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:03.236368   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:03.236373   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:03 GMT
	I1114 14:02:03.236378   29270 round_trippers.go:580]     Audit-Id: 712c1242-3f75-432b-8990-41248f3111a2
	I1114 14:02:03.236569   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:02:03.236875   29270 node_ready.go:58] node "multinode-661456-m03" has status "Ready":"False"
	I1114 14:02:03.733386   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:02:03.733466   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:03.733486   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:03.733495   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:03.736931   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:02:03.736957   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:03.736966   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:03.736977   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:03.736984   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:03.736993   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:03 GMT
	I1114 14:02:03.737000   29270 round_trippers.go:580]     Audit-Id: a0f05da7-00d9-421f-b89b-681085fa01a4
	I1114 14:02:03.737007   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:03.737104   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:02:04.233009   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:02:04.233035   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:04.233043   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:04.233049   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:04.235932   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:04.235965   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:04.235976   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:04.235985   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:04.235994   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:04.236002   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:04.236010   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:04 GMT
	I1114 14:02:04.236018   29270 round_trippers.go:580]     Audit-Id: 4cff4655-c4a1-47b8-a034-a6438a4c1700
	I1114 14:02:04.236108   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:02:04.733691   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:02:04.733716   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:04.733724   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:04.733730   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:04.737070   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:02:04.737092   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:04.737099   29270 round_trippers.go:580]     Audit-Id: 9bca4bb6-12a7-4d5d-9d7b-45867d00b06d
	I1114 14:02:04.737104   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:04.737112   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:04.737117   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:04.737122   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:04.737127   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:04 GMT
	I1114 14:02:04.737763   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1109","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVe [truncated 3338 chars]
	I1114 14:02:05.233396   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:02:05.233422   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:05.233446   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:05.233455   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:05.236169   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:05.236195   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:05.236209   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:05.236218   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:05.236231   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:05.236246   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:05.236255   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:05 GMT
	I1114 14:02:05.236261   29270 round_trippers.go:580]     Audit-Id: e25780a4-b629-4ed2-86ee-5fa376362315
	I1114 14:02:05.236457   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1126","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 3204 chars]
	I1114 14:02:05.236798   29270 node_ready.go:49] node "multinode-661456-m03" has status "Ready":"True"
	I1114 14:02:05.236822   29270 node_ready.go:38] duration metric: took 18.510429661s waiting for node "multinode-661456-m03" to be "Ready" ...
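The node_ready loop above issues GET /api/v1/nodes/multinode-661456-m03 roughly every 500ms, periodically logging "Ready":"False", until the Node's Ready condition flips to True (resourceVersion 1126) and the check exits after 18.51s. The following is a minimal client-go sketch of that polling pattern; the package and the helper name waitNodeReady are illustrative assumptions, not minikube's actual code.

    // Package readiness sketches the node polling pattern visible in the
    // log above; waitNodeReady is a hypothetical name, not minikube's API.
    package readiness

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady fetches the Node every 500ms and returns once its
    // NodeReady condition reports ConditionTrue, or errs at the deadline.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil // corresponds to node_ready.go:49 "Ready":"True"
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // the ~500ms cadence seen in the timestamps
    	}
    	return fmt.Errorf("node %q never reported Ready within %v", name, timeout)
    }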
	I1114 14:02:05.236835   29270 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 14:02:05.236913   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods
	I1114 14:02:05.236925   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:05.236936   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:05.236948   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:05.240910   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:02:05.240927   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:05.240935   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:05.240944   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:05.240953   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:05.240965   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:05.240985   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:05 GMT
	I1114 14:02:05.240993   29270 round_trippers.go:580]     Audit-Id: 71d7205e-69ec-483b-a66d-2c54e2a024c0
	I1114 14:02:05.242273   29270 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1126"},"items":[{"metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"902","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 83379 chars]
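With the node Ready, pod_ready.go switches to the extra wait (up to 6m0s) for system-critical pods: it lists everything in kube-system once, then tracks each matching pod individually. Below is a hedged sketch of the label filter implied by the label list logged at 14:02:05.236835; the helper name isSystemCritical is invented for illustration.

    // isSystemCritical reports whether a pod carries one of the
    // system-critical label pairs enumerated in the log above; the
    // helper name is hypothetical, not minikube's implementation.
    package readiness

    import corev1 "k8s.io/api/core/v1"

    var systemCriticalLabels = []struct{ key, val string }{
    	{"k8s-app", "kube-dns"},
    	{"component", "etcd"},
    	{"component", "kube-apiserver"},
    	{"component", "kube-controller-manager"},
    	{"k8s-app", "kube-proxy"},
    	{"component", "kube-scheduler"},
    }

    func isSystemCritical(pod *corev1.Pod) bool {
    	for _, l := range systemCriticalLabels {
    		if pod.Labels[l.key] == l.val {
    			return true
    		}
    	}
    	return false
    }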
	I1114 14:02:05.244636   29270 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace to be "Ready" ...
	I1114 14:02:05.244724   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kvb7v
	I1114 14:02:05.244734   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:05.244741   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:05.244747   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:05.247457   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:05.247471   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:05.247477   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:05.247483   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:05.247490   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:05.247499   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:05 GMT
	I1114 14:02:05.247510   29270 round_trippers.go:580]     Audit-Id: fd657466-8015-4ccb-a0dc-1e3bea91d7f7
	I1114 14:02:05.247522   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:05.247678   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kvb7v","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"b9c9a98f-d025-408a-ada2-0c19a356b4b9","resourceVersion":"902","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3aaeb3f8-a71a-421f-a9d9-2a77c883295b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3aaeb3f8-a71a-421f-a9d9-2a77c883295b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I1114 14:02:05.248061   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:02:05.248073   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:05.248080   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:05.248086   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:05.250073   29270 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 14:02:05.250097   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:05.250106   29270 round_trippers.go:580]     Audit-Id: 5ee83ad2-2be5-4a00-8c7c-6fe14b27fb94
	I1114 14:02:05.250112   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:05.250117   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:05.250125   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:05.250130   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:05.250137   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:05 GMT
	I1114 14:02:05.250302   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:02:05.250569   29270 pod_ready.go:92] pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace has status "Ready":"True"
	I1114 14:02:05.250583   29270 pod_ready.go:81] duration metric: took 5.926079ms waiting for pod "coredns-5dd5756b68-kvb7v" in "kube-system" namespace to be "Ready" ...
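Note: the GET / Request Headers / Response Status traces above are client-go's round-tripper debug output, visible here because the test runs minikube with -v=8 --alsologtostderr. At v=6 client-go logs only request URLs and latency; at v=8 it also logs request/response headers and the "Response Body" dumps that fill this report. A minimal sketch of enabling the same verbosity in a standalone client via klog flags (the flag-set name is arbitrary):

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	fs := flag.NewFlagSet("klog", flag.ExitOnError)
	klog.InitFlags(fs)
	// v=6 would log request URLs and latency; v=8 additionally logs
	// request/response headers plus the response bodies seen above.
	_ = fs.Set("v", "8")
	_ = fs.Set("alsologtostderr", "true")
	// Any client-go client constructed after this point emits the
	// round_trippers.go / request.go traces shown in this report.
	klog.Info("client-go request tracing enabled")
}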
	I1114 14:02:05.250591   29270 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:02:05.250636   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-661456
	I1114 14:02:05.250643   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:05.250650   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:05.250655   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:05.253342   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:05.253368   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:05.253375   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:05.253380   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:05.253385   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:05.253390   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:05 GMT
	I1114 14:02:05.253395   29270 round_trippers.go:580]     Audit-Id: 17a1ca77-4bc7-4db5-aa64-921dae294e30
	I1114 14:02:05.253403   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:05.253514   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-661456","namespace":"kube-system","uid":"a7fc10f1-0274-4c69-9ce0-a962bdfb4e17","resourceVersion":"890","creationTimestamp":"2023-11-14T13:53:30Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.222:2379","kubernetes.io/config.hash":"9e92b05ce6d5e91e18d34c8472e5d273","kubernetes.io/config.mirror":"9e92b05ce6d5e91e18d34c8472e5d273","kubernetes.io/config.seen":"2023-11-14T13:53:24.984306855Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6081 chars]
	I1114 14:02:05.253877   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:02:05.253892   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:05.253899   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:05.253905   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:05.257198   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:02:05.257219   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:05.257228   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:05.257236   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:05.257246   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:05.257258   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:05 GMT
	I1114 14:02:05.257270   29270 round_trippers.go:580]     Audit-Id: cc7eea80-c4dc-4104-b871-fe84d9e2b333
	I1114 14:02:05.257278   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:05.257473   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:02:05.257852   29270 pod_ready.go:92] pod "etcd-multinode-661456" in "kube-system" namespace has status "Ready":"True"
	I1114 14:02:05.257872   29270 pod_ready.go:81] duration metric: took 7.273064ms waiting for pod "etcd-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:02:05.257893   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:02:05.257958   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-661456
	I1114 14:02:05.257969   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:05.257980   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:05.257990   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:05.262609   29270 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 14:02:05.262625   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:05.262632   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:05.262638   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:05 GMT
	I1114 14:02:05.262646   29270 round_trippers.go:580]     Audit-Id: 47898765-1efc-4901-a270-952cb860f703
	I1114 14:02:05.262654   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:05.262663   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:05.262675   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:05.262874   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-661456","namespace":"kube-system","uid":"85c4ecc0-d6c3-46ba-a099-ba93cb0fac2e","resourceVersion":"877","creationTimestamp":"2023-11-14T13:53:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.222:8443","kubernetes.io/config.hash":"53c7ea94508e5c77038361438391a9cf","kubernetes.io/config.mirror":"53c7ea94508e5c77038361438391a9cf","kubernetes.io/config.seen":"2023-11-14T13:53:33.091288385Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7615 chars]
	I1114 14:02:05.263381   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:02:05.263401   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:05.263412   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:05.263424   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:05.266508   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:02:05.266528   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:05.266535   29270 round_trippers.go:580]     Audit-Id: 92e1f386-57d4-49bc-a217-a5ec8c703601
	I1114 14:02:05.266541   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:05.266546   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:05.266555   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:05.266561   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:05.266569   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:05 GMT
	I1114 14:02:05.266703   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:02:05.267076   29270 pod_ready.go:92] pod "kube-apiserver-multinode-661456" in "kube-system" namespace has status "Ready":"True"
	I1114 14:02:05.267099   29270 pod_ready.go:81] duration metric: took 9.193673ms waiting for pod "kube-apiserver-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:02:05.267113   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:02:05.267180   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-661456
	I1114 14:02:05.267191   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:05.267202   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:05.267212   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:05.269687   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:05.269705   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:05.269711   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:05.269718   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:05.269724   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:05 GMT
	I1114 14:02:05.269729   29270 round_trippers.go:580]     Audit-Id: bf618b1b-510c-42da-aba7-4f89dae468b5
	I1114 14:02:05.269736   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:05.269744   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:05.269940   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-661456","namespace":"kube-system","uid":"503c91d5-280b-44ab-8801-da2418e2bf6c","resourceVersion":"875","creationTimestamp":"2023-11-14T13:53:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"53acd8cd74cbb0cff5dbf435dc1b4fe3","kubernetes.io/config.mirror":"53acd8cd74cbb0cff5dbf435dc1b4fe3","kubernetes.io/config.seen":"2023-11-14T13:53:33.091289647Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7178 chars]
	I1114 14:02:05.270428   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:02:05.270443   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:05.270450   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:05.270455   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:05.272381   29270 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 14:02:05.272394   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:05.272400   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:05.272405   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:05.272410   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:05 GMT
	I1114 14:02:05.272415   29270 round_trippers.go:580]     Audit-Id: 95e670a2-3f71-45a0-b9f2-d99fe43d4b42
	I1114 14:02:05.272420   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:05.272425   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:05.272590   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:02:05.272952   29270 pod_ready.go:92] pod "kube-controller-manager-multinode-661456" in "kube-system" namespace has status "Ready":"True"
	I1114 14:02:05.272976   29270 pod_ready.go:81] duration metric: took 5.849767ms waiting for pod "kube-controller-manager-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:02:05.272989   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fkj7d" in "kube-system" namespace to be "Ready" ...
	I1114 14:02:05.434387   29270 request.go:629] Waited for 161.340902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkj7d
	I1114 14:02:05.434457   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkj7d
	I1114 14:02:05.434462   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:05.434469   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:05.434477   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:05.437468   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:05.437489   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:05.437495   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:05.437501   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:05.437508   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:05.437516   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:05.437523   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:05 GMT
	I1114 14:02:05.437531   29270 round_trippers.go:580]     Audit-Id: f45dd0d3-173d-4078-9901-e2bf4d36fe3c
	I1114 14:02:05.437718   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fkj7d","generateName":"kube-proxy-","namespace":"kube-system","uid":"5d920620-7354-4418-a44e-c7f2965d75a4","resourceVersion":"980","creationTimestamp":"2023-11-14T13:54:42Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8feb3a9f-acf6-44be-b014-f7ba9b8cce85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:54:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8feb3a9f-acf6-44be-b014-f7ba9b8cce85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5749 chars]
	I1114 14:02:05.633458   29270 request.go:629] Waited for 195.284951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:02:05.633526   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m02
	I1114 14:02:05.633533   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:05.633543   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:05.633551   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:05.636923   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:02:05.636944   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:05.636951   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:05 GMT
	I1114 14:02:05.636956   29270 round_trippers.go:580]     Audit-Id: ff08abc8-52bc-4c4a-857c-157a126359ea
	I1114 14:02:05.636962   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:05.636967   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:05.636973   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:05.636981   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:05.637102   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m02","uid":"7a8037c2-711e-4184-bd86-c54a55a140ac","resourceVersion":"1020","creationTimestamp":"2023-11-14T14:01:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 3254 chars]
	I1114 14:02:05.637355   29270 pod_ready.go:92] pod "kube-proxy-fkj7d" in "kube-system" namespace has status "Ready":"True"
	I1114 14:02:05.637375   29270 pod_ready.go:81] duration metric: took 364.377739ms waiting for pod "kube-proxy-fkj7d" in "kube-system" namespace to be "Ready" ...
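Note: the "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's default client-side rate limiter (QPS 5, burst 10), which spaces out these GETs independently of the server's API Priority and Fairness. A minimal sketch of relaxing that limiter when constructing a client; the numbers are illustrative, not minikube's settings:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; bursts of requests beyond that are
	// delayed client-side, producing the "Waited for ..." log lines.
	config.QPS = 50
	config.Burst = 100
	// Equivalently, install an explicit token-bucket limiter
	// (this takes precedence over QPS/Burst if set).
	config.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(50, 100)
	_ = kubernetes.NewForConfigOrDie(config)
}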
	I1114 14:02:05.637388   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ndrhk" in "kube-system" namespace to be "Ready" ...
	I1114 14:02:05.833855   29270 request.go:629] Waited for 196.385081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ndrhk
	I1114 14:02:05.833932   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ndrhk
	I1114 14:02:05.833939   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:05.833950   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:05.833959   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:05.837057   29270 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:02:05.837076   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:05.837085   29270 round_trippers.go:580]     Audit-Id: a782f0be-b159-42db-8ffe-09e92f796c67
	I1114 14:02:05.837093   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:05.837101   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:05.837108   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:05.837115   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:05.837123   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:05 GMT
	I1114 14:02:05.837379   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-ndrhk","generateName":"kube-proxy-","namespace":"kube-system","uid":"a11d15a6-5476-429f-ae29-445fa22f70dd","resourceVersion":"794","creationTimestamp":"2023-11-14T13:53:45Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8feb3a9f-acf6-44be-b014-f7ba9b8cce85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8feb3a9f-acf6-44be-b014-f7ba9b8cce85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I1114 14:02:06.034225   29270 request.go:629] Waited for 196.355185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:02:06.034291   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:02:06.034298   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:06.034312   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:06.034326   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:06.037084   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:06.037110   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:06.037119   29270 round_trippers.go:580]     Audit-Id: a1b99270-199c-4401-80cb-3b040577263b
	I1114 14:02:06.037128   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:06.037135   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:06.037142   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:06.037148   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:06.037156   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:06 GMT
	I1114 14:02:06.037410   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:02:06.037718   29270 pod_ready.go:92] pod "kube-proxy-ndrhk" in "kube-system" namespace has status "Ready":"True"
	I1114 14:02:06.037737   29270 pod_ready.go:81] duration metric: took 400.340253ms waiting for pod "kube-proxy-ndrhk" in "kube-system" namespace to be "Ready" ...
	I1114 14:02:06.037750   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r9r5l" in "kube-system" namespace to be "Ready" ...
	I1114 14:02:06.234209   29270 request.go:629] Waited for 196.394932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r9r5l
	I1114 14:02:06.234279   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r9r5l
	I1114 14:02:06.234284   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:06.234292   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:06.234297   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:06.237148   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:06.237171   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:06.237180   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:06.237188   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:06.237196   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:06 GMT
	I1114 14:02:06.237204   29270 round_trippers.go:580]     Audit-Id: e3554f79-2fe7-4f7e-951e-f835a9949e0d
	I1114 14:02:06.237211   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:06.237218   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:06.237598   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r9r5l","generateName":"kube-proxy-","namespace":"kube-system","uid":"27ff4b01-cd10-4c7f-99c2-a0fe362d11ad","resourceVersion":"1101","creationTimestamp":"2023-11-14T13:55:39Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8feb3a9f-acf6-44be-b014-f7ba9b8cce85","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:55:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8feb3a9f-acf6-44be-b014-f7ba9b8cce85\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5746 chars]
	I1114 14:02:06.434373   29270 request.go:629] Waited for 196.374393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:02:06.434431   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456-m03
	I1114 14:02:06.434447   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:06.434454   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:06.434460   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:06.437114   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:06.437138   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:06.437148   29270 round_trippers.go:580]     Audit-Id: 988a9845-7957-4663-8b6b-e1a63a929544
	I1114 14:02:06.437157   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:06.437165   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:06.437173   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:06.437180   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:06.437187   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:06 GMT
	I1114 14:02:06.437540   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456-m03","uid":"a0543214-88fc-4341-bbb0-1ce3bf229e7a","resourceVersion":"1126","creationTimestamp":"2023-11-14T14:01:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:01:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 3204 chars]
	I1114 14:02:06.437771   29270 pod_ready.go:92] pod "kube-proxy-r9r5l" in "kube-system" namespace has status "Ready":"True"
	I1114 14:02:06.437824   29270 pod_ready.go:81] duration metric: took 400.06632ms waiting for pod "kube-proxy-r9r5l" in "kube-system" namespace to be "Ready" ...
	I1114 14:02:06.437833   29270 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:02:06.634302   29270 request.go:629] Waited for 196.404734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-661456
	I1114 14:02:06.634374   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-661456
	I1114 14:02:06.634382   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:06.634394   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:06.634400   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:06.637274   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:06.637295   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:06.637305   29270 round_trippers.go:580]     Audit-Id: 9820d097-6fe4-4980-8946-5cbffa5419c8
	I1114 14:02:06.637313   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:06.637320   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:06.637327   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:06.637335   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:06.637343   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:06 GMT
	I1114 14:02:06.638115   29270 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-661456","namespace":"kube-system","uid":"16644b7a-7227-47b7-a06e-94b4dd7b0cce","resourceVersion":"879","creationTimestamp":"2023-11-14T13:53:33Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6486319bbce275a5a99514fbfdfe01ab","kubernetes.io/config.mirror":"6486319bbce275a5a99514fbfdfe01ab","kubernetes.io/config.seen":"2023-11-14T13:53:33.091290734Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T13:53:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4908 chars]
	I1114 14:02:06.833911   29270 request.go:629] Waited for 195.397366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:02:06.833993   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes/multinode-661456
	I1114 14:02:06.834001   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:06.834010   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:06.834022   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:06.836916   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:06.836940   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:06.836949   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:06.836958   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:06 GMT
	I1114 14:02:06.836965   29270 round_trippers.go:580]     Audit-Id: 89e4a39a-aeaf-4228-85ee-9bc8c1377dd0
	I1114 14:02:06.836973   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:06.836980   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:06.836987   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:06.837356   29270 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-11-14T13:53:29Z","fieldsType":"FieldsV1","fi [truncated 5157 chars]
	I1114 14:02:06.837655   29270 pod_ready.go:92] pod "kube-scheduler-multinode-661456" in "kube-system" namespace has status "Ready":"True"
	I1114 14:02:06.837669   29270 pod_ready.go:81] duration metric: took 399.830303ms waiting for pod "kube-scheduler-multinode-661456" in "kube-system" namespace to be "Ready" ...
	I1114 14:02:06.837679   29270 pod_ready.go:38] duration metric: took 1.60082995s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
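Note: each of the waits above boils down to polling the pod until its PodReady condition reports True, with a 6-minute ceiling. A minimal sketch of the same check with client-go, assuming a kubeconfig at the default location; this is an illustrative reimplementation, not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the pod every 500ms for up to 6 minutes.
func waitForPodReady(cs *kubernetes.Clientset, ns, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		return podIsReady(pod), nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForPodReady(cs, "kube-system", "coredns-5dd5756b68-kvb7v"); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}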
	I1114 14:02:06.837696   29270 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 14:02:06.837739   29270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:02:06.852355   29270 system_svc.go:56] duration metric: took 14.650327ms WaitForService to wait for kubelet.
	I1114 14:02:06.852378   29270 kubeadm.go:581] duration metric: took 20.14504306s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
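Note: the kubelet check above relies on systemctl's exit status: "systemctl is-active --quiet" prints nothing and exits 0 only when the unit is active. A minimal local sketch of that probe (minikube runs the equivalent command over SSH inside the VM):

package main

import (
	"fmt"
	"os/exec"
)

// serviceIsActive returns true when systemd reports the unit active;
// --quiet suppresses output, so the exit code alone carries the answer.
func serviceIsActive(name string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", name).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceIsActive("kubelet"))
}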
	I1114 14:02:06.852395   29270 node_conditions.go:102] verifying NodePressure condition ...
	I1114 14:02:07.033827   29270 request.go:629] Waited for 181.368599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.222:8443/api/v1/nodes
	I1114 14:02:07.033882   29270 round_trippers.go:463] GET https://192.168.39.222:8443/api/v1/nodes
	I1114 14:02:07.033886   29270 round_trippers.go:469] Request Headers:
	I1114 14:02:07.033894   29270 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:02:07.033899   29270 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1114 14:02:07.036888   29270 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:02:07.036913   29270 round_trippers.go:577] Response Headers:
	I1114 14:02:07.036922   29270 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:02:07 GMT
	I1114 14:02:07.036928   29270 round_trippers.go:580]     Audit-Id: 96ef467a-7b0a-4cfa-9d11-ad80f4ae7588
	I1114 14:02:07.036934   29270 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:02:07.036947   29270 round_trippers.go:580]     Content-Type: application/json
	I1114 14:02:07.036953   29270 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0b25b538-2c7b-4d86-9881-d8e31341f944
	I1114 14:02:07.036958   29270 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0c3f44d-1e3f-40e7-9811-c5be27875636
	I1114 14:02:07.037314   29270 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1132"},"items":[{"metadata":{"name":"multinode-661456","uid":"7c6ecc82-59ef-405e-b216-ff853692fed6","resourceVersion":"874","creationTimestamp":"2023-11-14T13:53:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-661456","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-661456","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T13_53_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFie
lds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 13533 chars]
	I1114 14:02:07.037846   29270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:02:07.037865   29270 node_conditions.go:123] node cpu capacity is 2
	I1114 14:02:07.037874   29270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:02:07.037878   29270 node_conditions.go:123] node cpu capacity is 2
	I1114 14:02:07.037882   29270 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:02:07.037886   29270 node_conditions.go:123] node cpu capacity is 2
	I1114 14:02:07.037890   29270 node_conditions.go:105] duration metric: took 185.491406ms to run NodePressure ...
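Note: the NodePressure pass above lists the nodes once and, for each of the three, records cpu and ephemeral-storage capacity while confirming no pressure condition is True. A minimal sketch of the same walk with client-go (illustrative, not minikube's node_conditions.go):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity helpers return *resource.Quantity values.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		// Flag any node under memory or disk pressure.
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure: %s=%s (%s)\n", c.Type, c.Status, c.Reason)
			}
		}
	}
}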
	I1114 14:02:07.037901   29270 start.go:228] waiting for startup goroutines ...
	I1114 14:02:07.037918   29270 start.go:242] writing updated cluster config ...
	I1114 14:02:07.038207   29270 ssh_runner.go:195] Run: rm -f paused
	I1114 14:02:07.086149   29270 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 14:02:07.088679   29270 out.go:177] * Done! kubectl is now configured to use "multinode-661456" cluster and "default" namespace by default
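Note: the "kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)" line compares the local kubectl client's version against the version the cluster reports; a minor skew of 0 means both sit at 1.28. A minimal sketch of fetching the server side of that comparison via the discovery client (illustrative only; minikube reads the client side from the local kubectl binary):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	sv, err := dc.ServerVersion()
	if err != nil {
		panic(err)
	}
	// GitVersion is e.g. "v1.28.3"; the Major/Minor pair is what gets
	// compared against the client's to compute the skew.
	fmt.Printf("server: %s (major=%s minor=%s)\n", sv.GitVersion, sv.Major, sv.Minor)
}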
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-11-14 13:59:44 UTC, ends at Tue 2023-11-14 14:02:08 UTC. --
	Nov 14 14:00:15 multinode-661456 dockerd[829]: time="2023-11-14T14:00:15.655243754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 14 14:00:15 multinode-661456 dockerd[829]: time="2023-11-14T14:00:15.655255694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 14 14:00:18 multinode-661456 cri-dockerd[1027]: time="2023-11-14T14:00:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a1e6538ac63afca984747d1621743aaf3e31f83d5a599e9ef8b9d3880a4128ef/resolv.conf as [nameserver 192.168.122.1]"
	Nov 14 14:00:18 multinode-661456 dockerd[829]: time="2023-11-14T14:00:18.500098535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 14 14:00:18 multinode-661456 dockerd[829]: time="2023-11-14T14:00:18.500191304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 14 14:00:18 multinode-661456 dockerd[829]: time="2023-11-14T14:00:18.500219247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 14 14:00:18 multinode-661456 dockerd[829]: time="2023-11-14T14:00:18.500745004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 14 14:00:30 multinode-661456 dockerd[829]: time="2023-11-14T14:00:30.284353511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 14 14:00:30 multinode-661456 dockerd[829]: time="2023-11-14T14:00:30.284745721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 14 14:00:30 multinode-661456 dockerd[829]: time="2023-11-14T14:00:30.284768606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 14 14:00:30 multinode-661456 dockerd[829]: time="2023-11-14T14:00:30.284862218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 14 14:00:30 multinode-661456 dockerd[829]: time="2023-11-14T14:00:30.288011234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 14 14:00:30 multinode-661456 dockerd[829]: time="2023-11-14T14:00:30.288306495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 14 14:00:30 multinode-661456 dockerd[829]: time="2023-11-14T14:00:30.288321900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 14 14:00:30 multinode-661456 dockerd[829]: time="2023-11-14T14:00:30.288330293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 14 14:00:30 multinode-661456 cri-dockerd[1027]: time="2023-11-14T14:00:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/768e3826276801859d29bf8f13dd8494ce85e9b3056d22eea6c3b19f69938cb7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Nov 14 14:00:30 multinode-661456 cri-dockerd[1027]: time="2023-11-14T14:00:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c26d434dab9f2d710054f61a44d5d6230273d5abcb7853a8d50383299ee01340/resolv.conf as [nameserver 192.168.122.1]"
	Nov 14 14:00:31 multinode-661456 dockerd[829]: time="2023-11-14T14:00:31.044973240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 14 14:00:31 multinode-661456 dockerd[829]: time="2023-11-14T14:00:31.047460970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 14 14:00:31 multinode-661456 dockerd[829]: time="2023-11-14T14:00:31.047557474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 14 14:00:31 multinode-661456 dockerd[829]: time="2023-11-14T14:00:31.047708495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 14 14:00:31 multinode-661456 dockerd[829]: time="2023-11-14T14:00:31.069237878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 14 14:00:31 multinode-661456 dockerd[829]: time="2023-11-14T14:00:31.069290170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 14 14:00:31 multinode-661456 dockerd[829]: time="2023-11-14T14:00:31.069304454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 14 14:00:31 multinode-661456 dockerd[829]: time="2023-11-14T14:00:31.069315482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1bc8dc3133edc       ead0a4a53df89                                                                                         About a minute ago   Running             coredns                   1                   c26d434dab9f2       coredns-5dd5756b68-kvb7v
	d8296681d76b8       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   768e382627680       busybox-5bc68d56bd-wrrkq
	4ec3f1c15860f       c7d1297425461                                                                                         About a minute ago   Running             kindnet-cni               1                   a1e6538ac63af       kindnet-fjpnd
	8e99c36de3274       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       1                   904e6e287ec9f       storage-provisioner
	ad1470624c554       bfc896cf80fba                                                                                         About a minute ago   Running             kube-proxy                1                   c0f75731baa85       kube-proxy-ndrhk
	a710d718d1b10       73deb9a3f7025                                                                                         About a minute ago   Running             etcd                      1                   f6ef08865c00f       etcd-multinode-661456
	9ec5492bba05d       6d1b4fd1b182d                                                                                         About a minute ago   Running             kube-scheduler            1                   39d59596c0830       kube-scheduler-multinode-661456
	b83b43bdf4e17       10baa1ca17068                                                                                         About a minute ago   Running             kube-controller-manager   1                   ddeae37f0ebd5       kube-controller-manager-multinode-661456
	981ae77038f8c       5374347291230                                                                                         About a minute ago   Running             kube-apiserver            1                   7716cf6798ae4       kube-apiserver-multinode-661456
	589c6fa9c4c85       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   7 minutes ago        Exited              busybox                   0                   234ee83a0de94       busybox-5bc68d56bd-wrrkq
	feeae8ba92005       6e38f40d628db                                                                                         8 minutes ago        Exited              storage-provisioner       0                   2bd3f30d7d310       storage-provisioner
	bab0d7f33070c       ead0a4a53df89                                                                                         8 minutes ago        Exited              coredns                   0                   e19ac901a1617       coredns-5dd5756b68-kvb7v
	8fd11e5a867b4       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              8 minutes ago        Exited              kindnet-cni               0                   7d18371e20a91       kindnet-fjpnd
	1f4caae4ccd2b       bfc896cf80fba                                                                                         8 minutes ago        Exited              kube-proxy                0                   b41d405d05c1b       kube-proxy-ndrhk
	0ce64485a2de7       73deb9a3f7025                                                                                         8 minutes ago        Exited              etcd                      0                   7965b783edc41       etcd-multinode-661456
	510016ba0c81e       6d1b4fd1b182d                                                                                         8 minutes ago        Exited              kube-scheduler            0                   fc0ae24f94d2b       kube-scheduler-multinode-661456
	8f35d19c847d2       10baa1ca17068                                                                                         8 minutes ago        Exited              kube-controller-manager   0                   a72669bfefa37       kube-controller-manager-multinode-661456
	4037c5756e5b2       5374347291230                                                                                         8 minutes ago        Exited              kube-apiserver            0                   749b09a9ecc05       kube-apiserver-multinode-661456
	
	* 
	* ==> coredns [1bc8dc3133ed] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47501 - 45720 "HINFO IN 1123662381113262830.4188409696581440950. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028667523s
	
	* 
	* ==> coredns [bab0d7f33070] <==
	* [INFO] 10.244.0.4:44733 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00160043s
	[INFO] 10.244.0.4:58773 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077633s
	[INFO] 10.244.0.4:47587 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000032098s
	[INFO] 10.244.0.4:45532 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001056786s
	[INFO] 10.244.0.4:46505 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123764s
	[INFO] 10.244.0.4:36309 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000427s
	[INFO] 10.244.0.4:34363 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034895s
	[INFO] 10.244.1.2:40167 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147286s
	[INFO] 10.244.1.2:51587 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000218209s
	[INFO] 10.244.1.2:45120 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156302s
	[INFO] 10.244.1.2:46069 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105743s
	[INFO] 10.244.0.4:56479 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000226714s
	[INFO] 10.244.0.4:55171 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076104s
	[INFO] 10.244.0.4:42989 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095312s
	[INFO] 10.244.0.4:56806 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129479s
	[INFO] 10.244.1.2:40419 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000195227s
	[INFO] 10.244.1.2:41268 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000243544s
	[INFO] 10.244.1.2:33587 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017508s
	[INFO] 10.244.1.2:43258 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000255594s
	[INFO] 10.244.0.4:41864 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143454s
	[INFO] 10.244.0.4:39602 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093115s
	[INFO] 10.244.0.4:57567 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078055s
	[INFO] 10.244.0.4:49229 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071275s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
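	
	Note: the SIGTERM/lameduck lines are this coredns instance being stopped cleanly during the stop phase, not a crash. To inspect the replacement instance after the restart (assuming the standard kubeadm label on the coredns pods):
	
	    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50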
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-661456
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-661456
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d8573efb5a7770e21024de23a29d810b200278b
	                    minikube.k8s.io/name=multinode-661456
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T13_53_34_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 13:53:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-661456
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 14:02:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 14:00:24 +0000   Tue, 14 Nov 2023 13:53:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 14:00:24 +0000   Tue, 14 Nov 2023 13:53:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 14:00:24 +0000   Tue, 14 Nov 2023 13:53:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 14:00:24 +0000   Tue, 14 Nov 2023 14:00:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    multinode-661456
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2466158166854468821317b24d4f81f6
	  System UUID:                24661581-6685-4468-8213-17b24d4f81f6
	  Boot ID:                    372bd1ff-5b3a-49de-8e87-bb06626ab82e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-wrrkq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 coredns-5dd5756b68-kvb7v                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m24s
	  kube-system                 etcd-multinode-661456                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m39s
	  kube-system                 kindnet-fjpnd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m24s
	  kube-system                 kube-apiserver-multinode-661456             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-controller-manager-multinode-661456    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 kube-proxy-ndrhk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-scheduler-multinode-661456             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m20s                  kube-proxy       
	  Normal  Starting                 113s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  8m44s (x8 over 8m44s)  kubelet          Node multinode-661456 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m44s (x8 over 8m44s)  kubelet          Node multinode-661456 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m44s (x7 over 8m44s)  kubelet          Node multinode-661456 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m36s                  kubelet          Node multinode-661456 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m36s                  kubelet          Node multinode-661456 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m36s                  kubelet          Node multinode-661456 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m36s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m24s                  node-controller  Node multinode-661456 event: Registered Node multinode-661456 in Controller
	  Normal  NodeReady                8m10s                  kubelet          Node multinode-661456 status is now: NodeReady
	  Normal  Starting                 2m                     kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)        kubelet          Node multinode-661456 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)        kubelet          Node multinode-661456 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x7 over 2m)        kubelet          Node multinode-661456 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           103s                   node-controller  Node multinode-661456 event: Registered Node multinode-661456 in Controller
	
	
	Name:               multinode-661456-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-661456-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 14:01:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-661456-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 14:02:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 14:01:13 +0000   Tue, 14 Nov 2023 14:01:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 14:01:13 +0000   Tue, 14 Nov 2023 14:01:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 14:01:13 +0000   Tue, 14 Nov 2023 14:01:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 14:01:13 +0000   Tue, 14 Nov 2023 14:01:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    multinode-661456-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 568fd760035f4bd9a7d2df2d354d0210
	  System UUID:                568fd760-035f-4bd9-a7d2-df2d354d0210
	  Boot ID:                    09f76ff0-5a03-4bfc-9c7b-4e08db324224
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-blwt7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kindnet-8rqgf               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m27s
	  kube-system                 kube-proxy-fkj7d            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m19s                  kube-proxy  
	  Normal  Starting                 64s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m27s (x5 over 7m29s)  kubelet     Node multinode-661456-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m27s (x5 over 7m29s)  kubelet     Node multinode-661456-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m27s (x5 over 7m29s)  kubelet     Node multinode-661456-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m11s                  kubelet     Node multinode-661456-m02 status is now: NodeReady
	  Normal  Starting                 67s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  66s (x2 over 67s)      kubelet     Node multinode-661456-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    66s (x2 over 67s)      kubelet     Node multinode-661456-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     66s (x2 over 67s)      kubelet     Node multinode-661456-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  66s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                56s                    kubelet     Node multinode-661456-m02 status is now: NodeReady
	
	
	Name:               multinode-661456-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-661456-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 14:01:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-661456-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 14:02:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 14:02:04 +0000   Tue, 14 Nov 2023 14:01:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 14:02:04 +0000   Tue, 14 Nov 2023 14:01:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 14:02:04 +0000   Tue, 14 Nov 2023 14:01:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 14:02:04 +0000   Tue, 14 Nov 2023 14:02:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.82
	  Hostname:    multinode-661456-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 8ca1a21a6cb443c8bf4b1fafc5cd3642
	  System UUID:                8ca1a21a-6cb4-43c8-bf4b-1fafc5cd3642
	  Boot ID:                    ee2599fb-912d-4fc3-9633-d9cd672e6b29
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9nvmm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m30s
	  kube-system                 kube-proxy-r9r5l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m22s                  kube-proxy  
	  Normal  Starting                 21s                    kube-proxy  
	  Normal  Starting                 5m35s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m30s (x5 over 6m31s)  kubelet     Node multinode-661456-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s (x5 over 6m31s)  kubelet     Node multinode-661456-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s (x5 over 6m31s)  kubelet     Node multinode-661456-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m14s                  kubelet     Node multinode-661456-m03 status is now: NodeReady
	  Normal  Starting                 5m38s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientPID     5m37s (x2 over 5m37s)  kubelet     Node multinode-661456-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m37s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m37s (x2 over 5m37s)  kubelet     Node multinode-661456-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m37s (x2 over 5m37s)  kubelet     Node multinode-661456-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m29s                  kubelet     Node multinode-661456-m03 status is now: NodeReady
	  Normal  Starting                 25s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x2 over 25s)      kubelet     Node multinode-661456-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x2 over 25s)      kubelet     Node multinode-661456-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x2 over 25s)      kubelet     Node multinode-661456-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5s                     kubelet     Node multinode-661456-m03 status is now: NodeReady
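	
	All three nodes report Ready at capture time, though multinode-661456-m03 only turned Ready 5s earlier. Assuming the kubeconfig written by this profile, the same view can be regenerated with:
	
	    kubectl get nodes -o wide
	    kubectl describe node multinode-661456-m03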
	
	* 
	* ==> dmesg <==
	* [Nov14 13:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067469] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.354162] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.291671] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.133302] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000004] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.474766] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.240741] systemd-fstab-generator[513]: Ignoring "noauto" for root device
	[  +0.106380] systemd-fstab-generator[524]: Ignoring "noauto" for root device
	[  +1.174826] systemd-fstab-generator[752]: Ignoring "noauto" for root device
	[  +0.275395] systemd-fstab-generator[790]: Ignoring "noauto" for root device
	[  +0.102836] systemd-fstab-generator[801]: Ignoring "noauto" for root device
	[  +0.128533] systemd-fstab-generator[814]: Ignoring "noauto" for root device
	[  +1.689235] systemd-fstab-generator[972]: Ignoring "noauto" for root device
	[  +0.109298] systemd-fstab-generator[983]: Ignoring "noauto" for root device
	[  +0.112465] systemd-fstab-generator[994]: Ignoring "noauto" for root device
	[  +0.119477] systemd-fstab-generator[1005]: Ignoring "noauto" for root device
	[  +0.138247] systemd-fstab-generator[1019]: Ignoring "noauto" for root device
	[Nov14 14:00] systemd-fstab-generator[1268]: Ignoring "noauto" for root device
	[  +0.420615] kauditd_printk_skb: 67 callbacks suppressed
	[ +17.721828] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [0ce64485a2de] <==
	* {"level":"info","ts":"2023-11-14T13:53:28.13846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became candidate at term 2"}
	{"level":"info","ts":"2023-11-14T13:53:28.138466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 received MsgVoteResp from d8a7e113a49009a2 at term 2"}
	{"level":"info","ts":"2023-11-14T13:53:28.138499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became leader at term 2"}
	{"level":"info","ts":"2023-11-14T13:53:28.138509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d8a7e113a49009a2 elected leader d8a7e113a49009a2 at term 2"}
	{"level":"info","ts":"2023-11-14T13:53:28.141508Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T13:53:28.142727Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d8a7e113a49009a2","local-member-attributes":"{Name:multinode-661456 ClientURLs:[https://192.168.39.222:2379]}","request-path":"/0/members/d8a7e113a49009a2/attributes","cluster-id":"26257d506d5fabfb","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-14T13:53:28.142943Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T13:53:28.145413Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T13:53:28.145499Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-14T13:53:28.145565Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"26257d506d5fabfb","local-member-id":"d8a7e113a49009a2","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T13:53:28.145642Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T13:53:28.145685Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T13:53:28.145697Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T13:53:28.147456Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.222:2379"}
	{"level":"info","ts":"2023-11-14T13:53:28.161453Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-14T13:56:43.7153Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-11-14T13:56:43.71544Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-661456","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.222:2380"],"advertise-client-urls":["https://192.168.39.222:2379"]}
	{"level":"warn","ts":"2023-11-14T13:56:43.715585Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.222:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-14T13:56:43.715685Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.222:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-14T13:56:43.716706Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-14T13:56:43.716862Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2023-11-14T13:56:43.748337Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d8a7e113a49009a2","current-leader-member-id":"d8a7e113a49009a2"}
	{"level":"info","ts":"2023-11-14T13:56:43.752006Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2023-11-14T13:56:43.752234Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2023-11-14T13:56:43.752262Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-661456","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.222:2380"],"advertise-client-urls":["https://192.168.39.222:2379"]}
	
	* 
	* ==> etcd [a710d718d1b1] <==
	* {"level":"info","ts":"2023-11-14T14:00:11.74953Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"26257d506d5fabfb","local-member-id":"d8a7e113a49009a2","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T14:00:11.750676Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T14:00:11.749054Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T14:00:11.759512Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T14:00:11.759528Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T14:00:11.761284Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"d8a7e113a49009a2","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-11-14T14:00:11.762278Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-14T14:00:11.762639Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d8a7e113a49009a2","initial-advertise-peer-urls":["https://192.168.39.222:2380"],"listen-peer-urls":["https://192.168.39.222:2380"],"advertise-client-urls":["https://192.168.39.222:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.222:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-14T14:00:11.76269Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-14T14:00:11.763112Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2023-11-14T14:00:11.763148Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.222:2380"}
	{"level":"info","ts":"2023-11-14T14:00:11.919002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-14T14:00:11.919067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-14T14:00:11.919084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 received MsgPreVoteResp from d8a7e113a49009a2 at term 2"}
	{"level":"info","ts":"2023-11-14T14:00:11.919095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became candidate at term 3"}
	{"level":"info","ts":"2023-11-14T14:00:11.9191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 received MsgVoteResp from d8a7e113a49009a2 at term 3"}
	{"level":"info","ts":"2023-11-14T14:00:11.919108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d8a7e113a49009a2 became leader at term 3"}
	{"level":"info","ts":"2023-11-14T14:00:11.919114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d8a7e113a49009a2 elected leader d8a7e113a49009a2 at term 3"}
	{"level":"info","ts":"2023-11-14T14:00:11.928125Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d8a7e113a49009a2","local-member-attributes":"{Name:multinode-661456 ClientURLs:[https://192.168.39.222:2379]}","request-path":"/0/members/d8a7e113a49009a2/attributes","cluster-id":"26257d506d5fabfb","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-14T14:00:11.928599Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T14:00:11.928794Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T14:00:11.9317Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.222:2379"}
	{"level":"info","ts":"2023-11-14T14:00:11.93197Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T14:00:11.932015Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-14T14:00:11.933391Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  14:02:09 up 2 min,  0 users,  load average: 0.37, 0.25, 0.10
	Linux multinode-661456 5.10.57 #1 SMP Sat Nov 11 01:15:44 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [4ec3f1c15860] <==
	* I1114 14:01:29.751807       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I1114 14:01:29.751835       1 main.go:227] handling current node
	I1114 14:01:29.751844       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I1114 14:01:29.751849       1 main.go:250] Node multinode-661456-m02 has CIDR [10.244.1.0/24] 
	I1114 14:01:29.752100       1 main.go:223] Handling node with IPs: map[192.168.39.82:{}]
	I1114 14:01:29.752109       1 main.go:250] Node multinode-661456-m03 has CIDR [10.244.3.0/24] 
	I1114 14:01:39.773811       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I1114 14:01:39.773868       1 main.go:227] handling current node
	I1114 14:01:39.773938       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I1114 14:01:39.773947       1 main.go:250] Node multinode-661456-m02 has CIDR [10.244.1.0/24] 
	I1114 14:01:39.774317       1 main.go:223] Handling node with IPs: map[192.168.39.82:{}]
	I1114 14:01:39.774355       1 main.go:250] Node multinode-661456-m03 has CIDR [10.244.3.0/24] 
	I1114 14:01:49.864429       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I1114 14:01:49.865015       1 main.go:227] handling current node
	I1114 14:01:49.865639       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I1114 14:01:49.866041       1 main.go:250] Node multinode-661456-m02 has CIDR [10.244.1.0/24] 
	I1114 14:01:49.866805       1 main.go:223] Handling node with IPs: map[192.168.39.82:{}]
	I1114 14:01:49.867047       1 main.go:250] Node multinode-661456-m03 has CIDR [10.244.2.0/24] 
	I1114 14:01:49.867450       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.82 Flags: [] Table: 0} 
	I1114 14:01:59.879683       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I1114 14:01:59.880223       1 main.go:227] handling current node
	I1114 14:01:59.880315       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I1114 14:01:59.880558       1 main.go:250] Node multinode-661456-m02 has CIDR [10.244.1.0/24] 
	I1114 14:01:59.881176       1 main.go:223] Handling node with IPs: map[192.168.39.82:{}]
	I1114 14:01:59.881392       1 main.go:250] Node multinode-661456-m03 has CIDR [10.244.2.0/24] 
	
	* 
	* ==> kindnet [8fd11e5a867b] <==
	* I1114 13:56:05.667050       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I1114 13:56:05.667156       1 main.go:227] handling current node
	I1114 13:56:05.667176       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I1114 13:56:05.667182       1 main.go:250] Node multinode-661456-m02 has CIDR [10.244.1.0/24] 
	I1114 13:56:05.667359       1 main.go:223] Handling node with IPs: map[192.168.39.82:{}]
	I1114 13:56:05.667365       1 main.go:250] Node multinode-661456-m03 has CIDR [10.244.2.0/24] 
	I1114 13:56:15.685330       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I1114 13:56:15.685617       1 main.go:227] handling current node
	I1114 13:56:15.685881       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I1114 13:56:15.686011       1 main.go:250] Node multinode-661456-m02 has CIDR [10.244.1.0/24] 
	I1114 13:56:15.686337       1 main.go:223] Handling node with IPs: map[192.168.39.82:{}]
	I1114 13:56:15.686482       1 main.go:250] Node multinode-661456-m03 has CIDR [10.244.2.0/24] 
	I1114 13:56:25.692899       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I1114 13:56:25.692945       1 main.go:227] handling current node
	I1114 13:56:25.692957       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I1114 13:56:25.692963       1 main.go:250] Node multinode-661456-m02 has CIDR [10.244.1.0/24] 
	I1114 13:56:25.693206       1 main.go:223] Handling node with IPs: map[192.168.39.82:{}]
	I1114 13:56:25.693240       1 main.go:250] Node multinode-661456-m03 has CIDR [10.244.2.0/24] 
	I1114 13:56:35.706803       1 main.go:223] Handling node with IPs: map[192.168.39.222:{}]
	I1114 13:56:35.706862       1 main.go:227] handling current node
	I1114 13:56:35.706874       1 main.go:223] Handling node with IPs: map[192.168.39.228:{}]
	I1114 13:56:35.706880       1 main.go:250] Node multinode-661456-m02 has CIDR [10.244.1.0/24] 
	I1114 13:56:35.707015       1 main.go:223] Handling node with IPs: map[192.168.39.82:{}]
	I1114 13:56:35.707021       1 main.go:250] Node multinode-661456-m03 has CIDR [10.244.3.0/24] 
	I1114 13:56:35.707216       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.39.82 Flags: [] Table: 0} 
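	
	Read together, the two kindnet logs show multinode-661456-m03's PodCIDR being reassigned on each re-registration: 10.244.2.0/24 before the stop, 10.244.3.0/24 at 13:56:35, and back to 10.244.2.0/24 at 14:01:49, with kindnet replacing the host route each time. The currently assigned CIDRs and the resulting routes can be checked with:
	
	    kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR
	    out/minikube-linux-amd64 -p multinode-661456 ssh -- ip route show | grep 10.244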
	
	* 
	* ==> kube-apiserver [4037c5756e5b] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1114 13:56:53.677318       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1114 13:56:53.739320       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1114 13:56:53.762198       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
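	
	These repeated dials to 127.0.0.1:2379 coincide with the etcd [0ce64485a2de] shutdown at 13:56:43 above, so they look like expected noise from the apiserver outliving etcd during the stop rather than a cause of the restart failure. Whether etcd is currently listening can be checked with, for example:
	
	    out/minikube-linux-amd64 -p multinode-661456 ssh -- sudo ss -ltn | grep 2379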
	
	* 
	* ==> kube-apiserver [981ae77038f8] <==
	* I1114 14:00:14.020251       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1114 14:00:14.042076       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1114 14:00:14.042248       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1114 14:00:14.140348       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1114 14:00:14.140421       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1114 14:00:14.155279       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1114 14:00:14.205258       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1114 14:00:14.205369       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1114 14:00:14.205390       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1114 14:00:14.208733       1 shared_informer.go:318] Caches are synced for configmaps
	I1114 14:00:14.218072       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1114 14:00:14.223083       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1114 14:00:14.223498       1 aggregator.go:166] initial CRD sync complete...
	I1114 14:00:14.223533       1 autoregister_controller.go:141] Starting autoregister controller
	I1114 14:00:14.223540       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1114 14:00:14.223545       1 cache.go:39] Caches are synced for autoregister controller
	E1114 14:00:14.234321       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1114 14:00:15.023443       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1114 14:00:16.678728       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1114 14:00:16.821810       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1114 14:00:16.832545       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1114 14:00:16.909389       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1114 14:00:16.917745       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1114 14:00:26.878241       1 controller.go:624] quota admission added evaluator for: endpoints
	I1114 14:00:26.931326       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [8f35d19c847d] <==
	* I1114 13:54:58.890994       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-661456-m02"
	I1114 13:55:01.239652       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1114 13:55:01.256547       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-tx7cv"
	I1114 13:55:01.283749       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-wrrkq"
	I1114 13:55:01.297449       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="58.561298ms"
	I1114 13:55:01.314582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="17.064679ms"
	I1114 13:55:01.356436       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="41.74948ms"
	I1114 13:55:01.356536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="45.481µs"
	I1114 13:55:03.570686       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.363922ms"
	I1114 13:55:03.571274       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.692µs"
	I1114 13:55:04.598472       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.009377ms"
	I1114 13:55:04.598563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.535µs"
	I1114 13:55:39.763145       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-661456-m02"
	I1114 13:55:39.770381       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-661456-m03\" does not exist"
	I1114 13:55:39.782443       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-661456-m03" podCIDRs=["10.244.2.0/24"]
	I1114 13:55:39.817801       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-r9r5l"
	I1114 13:55:39.817854       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9nvmm"
	I1114 13:55:40.636403       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-661456-m03"
	I1114 13:55:40.636483       1 event.go:307] "Event occurred" object="multinode-661456-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-661456-m03 event: Registered Node multinode-661456-m03 in Controller"
	I1114 13:55:55.735227       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-661456-m02"
	I1114 13:56:31.266237       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-661456-m02"
	I1114 13:56:32.167493       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-661456-m03\" does not exist"
	I1114 13:56:32.167687       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-661456-m02"
	I1114 13:56:32.197433       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-661456-m03" podCIDRs=["10.244.3.0/24"]
	I1114 13:56:40.325338       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-661456-m02"
	
	* 
	* ==> kube-controller-manager [b83b43bdf4e1] <==
	* I1114 14:01:06.809786       1 event.go:307] "Event occurred" object="kube-system/kindnet-9nvmm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1114 14:01:06.828844       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-r9r5l" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1114 14:01:13.393804       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-661456-m02"
	I1114 14:01:16.024363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="679.017µs"
	I1114 14:01:16.201837       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="85.027µs"
	I1114 14:01:16.209854       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="40.885µs"
	I1114 14:01:16.210654       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-tx7cv" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-tx7cv"
	I1114 14:01:40.985756       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-blwt7"
	I1114 14:01:41.000724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.839806ms"
	I1114 14:01:41.000844       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.9µs"
	I1114 14:01:41.017358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.278667ms"
	I1114 14:01:41.018970       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="143.302µs"
	I1114 14:01:41.025649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.889µs"
	I1114 14:01:42.345124       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.872601ms"
	I1114 14:01:42.345323       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="128.656µs"
	I1114 14:01:43.992246       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-661456-m02"
	I1114 14:01:44.893547       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-661456-m03\" does not exist"
	I1114 14:01:44.895628       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-661456-m02"
	I1114 14:01:44.896214       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-lqtzt" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-lqtzt"
	I1114 14:01:44.918680       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-661456-m03" podCIDRs=["10.244.2.0/24"]
	I1114 14:01:45.766132       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="57.296µs"
	I1114 14:01:45.968562       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="95.892µs"
	I1114 14:01:45.981104       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="104.346µs"
	I1114 14:01:45.986777       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.689µs"
	I1114 14:02:05.019196       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-661456-m03"
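	
	The nodeName="multinode-661456-m03" does not exist message followed by a fresh "Set node PodCIDR" shows m03 being re-registered as a new node object after the restart, which is what triggers the CIDR reassignment seen in the kindnet logs. The live assignment can be read directly from the node spec:
	
	    kubectl get node multinode-661456-m03 -o jsonpath='{.spec.podCIDR}{"\n"}'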
	
	* 
	* ==> kube-proxy [1f4caae4ccd2] <==
	* I1114 13:53:48.277901       1 server_others.go:69] "Using iptables proxy"
	I1114 13:53:48.296960       1 node.go:141] Successfully retrieved node IP: 192.168.39.222
	I1114 13:53:48.365146       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1114 13:53:48.365170       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1114 13:53:48.369228       1 server_others.go:152] "Using iptables Proxier"
	I1114 13:53:48.370504       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 13:53:48.371190       1 server.go:846] "Version info" version="v1.28.3"
	I1114 13:53:48.371207       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 13:53:48.373467       1 config.go:97] "Starting endpoint slice config controller"
	I1114 13:53:48.373773       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 13:53:48.373877       1 config.go:188] "Starting service config controller"
	I1114 13:53:48.373888       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 13:53:48.375333       1 config.go:315] "Starting node config controller"
	I1114 13:53:48.375372       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 13:53:48.474531       1 shared_informer.go:318] Caches are synced for service config
	I1114 13:53:48.474605       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 13:53:48.476035       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [ad1470624c55] <==
	* I1114 14:00:15.353972       1 server_others.go:69] "Using iptables proxy"
	I1114 14:00:15.384137       1 node.go:141] Successfully retrieved node IP: 192.168.39.222
	I1114 14:00:15.451483       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1114 14:00:15.451533       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1114 14:00:15.454174       1 server_others.go:152] "Using iptables Proxier"
	I1114 14:00:15.455065       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 14:00:15.455592       1 server.go:846] "Version info" version="v1.28.3"
	I1114 14:00:15.455632       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 14:00:15.458030       1 config.go:188] "Starting service config controller"
	I1114 14:00:15.458584       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 14:00:15.458643       1 config.go:97] "Starting endpoint slice config controller"
	I1114 14:00:15.458649       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 14:00:15.462284       1 config.go:315] "Starting node config controller"
	I1114 14:00:15.462326       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 14:00:15.559162       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 14:00:15.559234       1 shared_informer.go:318] Caches are synced for service config
	I1114 14:00:15.562788       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [510016ba0c81] <==
	* E1114 13:53:29.692772       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1114 13:53:30.659795       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 13:53:30.660388       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1114 13:53:30.708186       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 13:53:30.708236       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1114 13:53:30.717229       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1114 13:53:30.717280       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1114 13:53:30.745166       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1114 13:53:30.745584       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1114 13:53:30.770697       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 13:53:30.770804       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1114 13:53:30.856581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 13:53:30.856825       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1114 13:53:30.883623       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 13:53:30.883752       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1114 13:53:30.903569       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 13:53:30.903865       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1114 13:53:30.964945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 13:53:30.965205       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1114 13:53:31.048879       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 13:53:31.048904       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1114 13:53:31.159667       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 13:53:31.159695       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1114 13:53:33.660499       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1114 13:56:43.642371       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [9ec5492bba05] <==
	* I1114 14:00:12.247872       1 serving.go:348] Generated self-signed cert in-memory
	W1114 14:00:14.065954       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1114 14:00:14.066006       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1114 14:00:14.066018       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1114 14:00:14.066025       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1114 14:00:14.118303       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1114 14:00:14.118356       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 14:00:14.128571       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1114 14:00:14.132528       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1114 14:00:14.132701       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1114 14:00:14.136485       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1114 14:00:14.251859       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 13:59:44 UTC, ends at Tue 2023-11-14 14:02:09 UTC. --
	Nov 14 14:00:17 multinode-661456 kubelet[1274]: E1114 14:00:17.899222    1274 projected.go:198] Error preparing data for projected volume kube-api-access-qdhlw for pod default/busybox-5bc68d56bd-wrrkq: object "default"/"kube-root-ca.crt" not registered
	Nov 14 14:00:17 multinode-661456 kubelet[1274]: E1114 14:00:17.899345    1274 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/961b5072-649a-4ba6-856b-7d2ceb595b86-kube-api-access-qdhlw podName:961b5072-649a-4ba6-856b-7d2ceb595b86 nodeName:}" failed. No retries permitted until 2023-11-14 14:00:21.89932703 +0000 UTC m=+13.005713916 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qdhlw" (UniqueName: "kubernetes.io/projected/961b5072-649a-4ba6-856b-7d2ceb595b86-kube-api-access-qdhlw") pod "busybox-5bc68d56bd-wrrkq" (UID: "961b5072-649a-4ba6-856b-7d2ceb595b86") : object "default"/"kube-root-ca.crt" not registered
	Nov 14 14:00:18 multinode-661456 kubelet[1274]: I1114 14:00:18.412157    1274 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1e6538ac63afca984747d1621743aaf3e31f83d5a599e9ef8b9d3880a4128ef"
	Nov 14 14:00:19 multinode-661456 kubelet[1274]: E1114 14:00:19.377104    1274 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Nov 14 14:00:19 multinode-661456 kubelet[1274]: E1114 14:00:19.457253    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-wrrkq" podUID="961b5072-649a-4ba6-856b-7d2ceb595b86"
	Nov 14 14:00:19 multinode-661456 kubelet[1274]: E1114 14:00:19.459454    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-kvb7v" podUID="b9c9a98f-d025-408a-ada2-0c19a356b4b9"
	Nov 14 14:00:21 multinode-661456 kubelet[1274]: E1114 14:00:21.297619    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-wrrkq" podUID="961b5072-649a-4ba6-856b-7d2ceb595b86"
	Nov 14 14:00:21 multinode-661456 kubelet[1274]: E1114 14:00:21.298336    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-kvb7v" podUID="b9c9a98f-d025-408a-ada2-0c19a356b4b9"
	Nov 14 14:00:21 multinode-661456 kubelet[1274]: E1114 14:00:21.830147    1274 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 14 14:00:21 multinode-661456 kubelet[1274]: E1114 14:00:21.830268    1274 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b9c9a98f-d025-408a-ada2-0c19a356b4b9-config-volume podName:b9c9a98f-d025-408a-ada2-0c19a356b4b9 nodeName:}" failed. No retries permitted until 2023-11-14 14:00:29.830250349 +0000 UTC m=+20.936637236 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b9c9a98f-d025-408a-ada2-0c19a356b4b9-config-volume") pod "coredns-5dd5756b68-kvb7v" (UID: "b9c9a98f-d025-408a-ada2-0c19a356b4b9") : object "kube-system"/"coredns" not registered
	Nov 14 14:00:21 multinode-661456 kubelet[1274]: E1114 14:00:21.931196    1274 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 14 14:00:21 multinode-661456 kubelet[1274]: E1114 14:00:21.931227    1274 projected.go:198] Error preparing data for projected volume kube-api-access-qdhlw for pod default/busybox-5bc68d56bd-wrrkq: object "default"/"kube-root-ca.crt" not registered
	Nov 14 14:00:21 multinode-661456 kubelet[1274]: E1114 14:00:21.931305    1274 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/961b5072-649a-4ba6-856b-7d2ceb595b86-kube-api-access-qdhlw podName:961b5072-649a-4ba6-856b-7d2ceb595b86 nodeName:}" failed. No retries permitted until 2023-11-14 14:00:29.931290222 +0000 UTC m=+21.037677108 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qdhlw" (UniqueName: "kubernetes.io/projected/961b5072-649a-4ba6-856b-7d2ceb595b86-kube-api-access-qdhlw") pod "busybox-5bc68d56bd-wrrkq" (UID: "961b5072-649a-4ba6-856b-7d2ceb595b86") : object "default"/"kube-root-ca.crt" not registered
	Nov 14 14:00:23 multinode-661456 kubelet[1274]: E1114 14:00:23.297807    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-kvb7v" podUID="b9c9a98f-d025-408a-ada2-0c19a356b4b9"
	Nov 14 14:00:23 multinode-661456 kubelet[1274]: E1114 14:00:23.298837    1274 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-wrrkq" podUID="961b5072-649a-4ba6-856b-7d2ceb595b86"
	Nov 14 14:00:30 multinode-661456 kubelet[1274]: I1114 14:00:30.885838    1274 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c26d434dab9f2d710054f61a44d5d6230273d5abcb7853a8d50383299ee01340"
	Nov 14 14:00:30 multinode-661456 kubelet[1274]: I1114 14:00:30.894741    1274 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="768e3826276801859d29bf8f13dd8494ce85e9b3056d22eea6c3b19f69938cb7"
	Nov 14 14:01:09 multinode-661456 kubelet[1274]: E1114 14:01:09.336134    1274 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 14:01:09 multinode-661456 kubelet[1274]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 14:01:09 multinode-661456 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 14:01:09 multinode-661456 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 14 14:02:09 multinode-661456 kubelet[1274]: E1114 14:02:09.344619    1274 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 14 14:02:09 multinode-661456 kubelet[1274]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 14 14:02:09 multinode-661456 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 14 14:02:09 multinode-661456 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-661456 -n multinode-661456
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-661456 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (158.19s)
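Note: two distinct error patterns appear in the post-mortem logs above and are worth separating. The kube-scheduler "is forbidden ... at the cluster scope" reflector errors from 13:53 are ordinary startup noise while the apiserver's RBAC objects come up, and they stop once the informer caches sync. The repeated kubelet "Could not set up iptables canary" error, by contrast, points at a missing IPv6 nat table in the guest kernel, exactly as the "do you need to insmod?" hint says. A hedged way to confirm this by hand (a sketch only, assuming the multinode-661456 VM from this run is still up and its guest kernel ships the module):

	# Is the IPv6 nat module loaded in the guest?
	out/minikube-linux-amd64 ssh -p multinode-661456 "lsmod | grep ip6table_nat"
	# If not, try loading it, then re-check that the nat table exists.
	out/minikube-linux-amd64 ssh -p multinode-661456 "sudo modprobe ip6table_nat"
	out/minikube-linux-amd64 ssh -p multinode-661456 "sudo ip6tables -t nat -L"

The canary failure is non-fatal on an IPv4-only cluster, so it is unlikely to be the root cause of the restart failure itself.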

x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-133714 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p old-k8s-version-133714 "sudo crictl images -o json": exit status 1 (239.345797ms)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p old-k8s-version-133714 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
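Note: the root failure here is a CRI version mismatch rather than genuinely missing images. The crictl in the guest probes the CRI v1 image API, but this v1.16 cluster still serves the container runtime through dockershim, which only implements the older v1alpha2 API; hence "unknown service runtime.v1.ImageService". The secondary "invalid character '\x1b'" decode error is just fallout: the colorized FATA[0000] line begins with an ANSI escape byte, which the test then feeds to the JSON decoder. A hedged manual cross-check (a sketch, assuming the old-k8s-version-133714 profile is still running) is to list images through Docker directly and bypass the CRI endpoint entirely:

	# Bypass dockershim's CRI socket and ask the Docker engine itself.
	out/minikube-linux-amd64 ssh -p old-k8s-version-133714 \
	  "docker images --format '{{.Repository}}:{{.Tag}}'"

Pointing crictl at a different endpoint via /etc/crictl.yaml (runtime-endpoint / image-endpoint) would not help, since dockershim simply does not serve the v1 ImageService on any socket.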
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-133714 -n old-k8s-version-133714
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-133714 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-277939                                  | embed-certs-277939           | jenkins | v1.32.0 | 14 Nov 23 14:32 UTC | 14 Nov 23 14:32 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-277939                                  | embed-certs-277939           | jenkins | v1.32.0 | 14 Nov 23 14:32 UTC | 14 Nov 23 14:32 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-277939                                  | embed-certs-277939           | jenkins | v1.32.0 | 14 Nov 23 14:32 UTC | 14 Nov 23 14:32 UTC |
	| delete  | -p embed-certs-277939                                  | embed-certs-277939           | jenkins | v1.32.0 | 14 Nov 23 14:32 UTC | 14 Nov 23 14:32 UTC |
	| start   | -p newest-cni-981589 --memory=2200 --alsologtostderr   | newest-cni-981589            | jenkins | v1.32.0 | 14 Nov 23 14:32 UTC | 14 Nov 23 14:33 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.28.3            |                              |         |         |                     |                     |
	| ssh     | -p no-preload-678256 sudo                              | no-preload-678256            | jenkins | v1.32.0 | 14 Nov 23 14:32 UTC | 14 Nov 23 14:32 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p no-preload-678256                                   | no-preload-678256            | jenkins | v1.32.0 | 14 Nov 23 14:32 UTC | 14 Nov 23 14:32 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-678256                                   | no-preload-678256            | jenkins | v1.32.0 | 14 Nov 23 14:32 UTC | 14 Nov 23 14:32 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-678256                                   | no-preload-678256            | jenkins | v1.32.0 | 14 Nov 23 14:32 UTC | 14 Nov 23 14:32 UTC |
	| delete  | -p no-preload-678256                                   | no-preload-678256            | jenkins | v1.32.0 | 14 Nov 23 14:32 UTC | 14 Nov 23 14:32 UTC |
	| addons  | enable metrics-server -p newest-cni-981589             | newest-cni-981589            | jenkins | v1.32.0 | 14 Nov 23 14:33 UTC | 14 Nov 23 14:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-981589                                   | newest-cni-981589            | jenkins | v1.32.0 | 14 Nov 23 14:33 UTC | 14 Nov 23 14:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-981589                  | newest-cni-981589            | jenkins | v1.32.0 | 14 Nov 23 14:33 UTC | 14 Nov 23 14:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-981589 --memory=2200 --alsologtostderr   | newest-cni-981589            | jenkins | v1.32.0 | 14 Nov 23 14:33 UTC | 14 Nov 23 14:34 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.28.3            |                              |         |         |                     |                     |
	| ssh     | -p                                                     | default-k8s-diff-port-817895 | jenkins | v1.32.0 | 14 Nov 23 14:33 UTC | 14 Nov 23 14:33 UTC |
	|         | default-k8s-diff-port-817895                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-817895 | jenkins | v1.32.0 | 14 Nov 23 14:33 UTC | 14 Nov 23 14:33 UTC |
	|         | default-k8s-diff-port-817895                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-817895 | jenkins | v1.32.0 | 14 Nov 23 14:33 UTC | 14 Nov 23 14:33 UTC |
	|         | default-k8s-diff-port-817895                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-817895 | jenkins | v1.32.0 | 14 Nov 23 14:33 UTC | 14 Nov 23 14:33 UTC |
	|         | default-k8s-diff-port-817895                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-817895 | jenkins | v1.32.0 | 14 Nov 23 14:33 UTC | 14 Nov 23 14:33 UTC |
	|         | default-k8s-diff-port-817895                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-981589 sudo                              | newest-cni-981589            | jenkins | v1.32.0 | 14 Nov 23 14:34 UTC | 14 Nov 23 14:34 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-981589                                   | newest-cni-981589            | jenkins | v1.32.0 | 14 Nov 23 14:34 UTC | 14 Nov 23 14:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-981589                                   | newest-cni-981589            | jenkins | v1.32.0 | 14 Nov 23 14:34 UTC | 14 Nov 23 14:34 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-981589                                   | newest-cni-981589            | jenkins | v1.32.0 | 14 Nov 23 14:34 UTC | 14 Nov 23 14:34 UTC |
	| delete  | -p newest-cni-981589                                   | newest-cni-981589            | jenkins | v1.32.0 | 14 Nov 23 14:34 UTC | 14 Nov 23 14:34 UTC |
	| ssh     | -p old-k8s-version-133714 sudo                         | old-k8s-version-133714       | jenkins | v1.32.0 | 14 Nov 23 14:34 UTC |                     |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 14:33:47
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 14:33:47.713927   61837 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:33:47.714054   61837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:33:47.714062   61837 out.go:309] Setting ErrFile to fd 2...
	I1114 14:33:47.714067   61837 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:33:47.714236   61837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
	I1114 14:33:47.714785   61837 out.go:303] Setting JSON to false
	I1114 14:33:47.715817   61837 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4578,"bootTime":1699967850,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 14:33:47.715879   61837 start.go:138] virtualization: kvm guest
	I1114 14:33:47.718065   61837 out.go:177] * [newest-cni-981589] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 14:33:47.720030   61837 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 14:33:47.721515   61837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 14:33:47.720125   61837 notify.go:220] Checking for updates...
	I1114 14:33:47.722880   61837 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 14:33:47.724203   61837 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-6041/.minikube
	I1114 14:33:47.726178   61837 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 14:33:47.727417   61837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 14:33:47.728988   61837 config.go:182] Loaded profile config "newest-cni-981589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 14:33:47.729576   61837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:33:47.729666   61837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:33:47.747242   61837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34605
	I1114 14:33:47.747646   61837 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:33:47.748270   61837 main.go:141] libmachine: Using API Version  1
	I1114 14:33:47.748301   61837 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:33:47.748706   61837 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:33:47.748932   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	I1114 14:33:47.749166   61837 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 14:33:47.749491   61837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:33:47.749546   61837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:33:47.765288   61837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
	I1114 14:33:47.765724   61837 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:33:47.766306   61837 main.go:141] libmachine: Using API Version  1
	I1114 14:33:47.766329   61837 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:33:47.766683   61837 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:33:47.766867   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	I1114 14:33:47.805893   61837 out.go:177] * Using the kvm2 driver based on existing profile
	I1114 14:33:47.807201   61837 start.go:298] selected driver: kvm2
	I1114 14:33:47.807225   61837 start.go:902] validating driver "kvm2" against &{Name:newest-cni-981589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-981589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:33:47.807384   61837 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 14:33:47.808206   61837 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:33:47.808295   61837 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17581-6041/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 14:33:47.824753   61837 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 14:33:47.825137   61837 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1114 14:33:47.825210   61837 cni.go:84] Creating CNI manager for ""
	I1114 14:33:47.825238   61837 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1114 14:33:47.825258   61837 start_flags.go:323] config:
	{Name:newest-cni-981589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-981589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:33:47.825470   61837 iso.go:125] acquiring lock: {Name:mk133084c23ed177adc820fc7d96b1f642fbaa07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:33:47.827086   61837 out.go:177] * Starting control plane node newest-cni-981589 in cluster newest-cni-981589
	I1114 14:33:47.828295   61837 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1114 14:33:47.828344   61837 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17581-6041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1114 14:33:47.828356   61837 cache.go:56] Caching tarball of preloaded images
	I1114 14:33:47.828433   61837 preload.go:174] Found /home/jenkins/minikube-integration/17581-6041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1114 14:33:47.828445   61837 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1114 14:33:47.828595   61837 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/newest-cni-981589/config.json ...
	I1114 14:33:47.828846   61837 start.go:365] acquiring machines lock for newest-cni-981589: {Name:mka8a7be0fef2cfa89eb7b4f7f1c7ded4441f603 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1114 14:33:47.828902   61837 start.go:369] acquired machines lock for "newest-cni-981589" in 30.511µs
	I1114 14:33:47.828920   61837 start.go:96] Skipping create...Using existing machine configuration
	I1114 14:33:47.828926   61837 fix.go:54] fixHost starting: 
	I1114 14:33:47.829278   61837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:33:47.829320   61837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:33:47.845670   61837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32959
	I1114 14:33:47.846138   61837 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:33:47.846619   61837 main.go:141] libmachine: Using API Version  1
	I1114 14:33:47.846644   61837 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:33:47.846990   61837 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:33:47.847175   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	I1114 14:33:47.847311   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetState
	I1114 14:33:47.849012   61837 fix.go:102] recreateIfNeeded on newest-cni-981589: state=Stopped err=<nil>
	I1114 14:33:47.849053   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	W1114 14:33:47.849210   61837 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 14:33:47.851106   61837 out.go:177] * Restarting existing kvm2 VM for "newest-cni-981589" ...
	I1114 14:33:46.694652   58802 system_pods.go:86] 4 kube-system pods found
	I1114 14:33:46.694677   58802 system_pods.go:89] "coredns-5644d7b6d9-dss8s" [d70babb7-bb72-4369-b05d-09026c086dde] Running
	I1114 14:33:46.694682   58802 system_pods.go:89] "kube-proxy-cdd4t" [2ec12192-4ffc-498a-a39a-9efe5a0ea335] Running
	I1114 14:33:46.694690   58802 system_pods.go:89] "metrics-server-74d5856cc6-gsjk7" [fb8615e8-85c2-466a-8c4c-d0da4fe15502] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:33:46.694694   58802 system_pods.go:89] "storage-provisioner" [6878a446-51c1-423d-9150-b28bfd7b21d2] Running
	I1114 14:33:46.694718   58802 retry.go:31] will retry after 3.584187318s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 14:33:50.287537   58802 system_pods.go:86] 4 kube-system pods found
	I1114 14:33:50.287578   58802 system_pods.go:89] "coredns-5644d7b6d9-dss8s" [d70babb7-bb72-4369-b05d-09026c086dde] Running
	I1114 14:33:50.287587   58802 system_pods.go:89] "kube-proxy-cdd4t" [2ec12192-4ffc-498a-a39a-9efe5a0ea335] Running
	I1114 14:33:50.287598   58802 system_pods.go:89] "metrics-server-74d5856cc6-gsjk7" [fb8615e8-85c2-466a-8c4c-d0da4fe15502] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:33:50.287605   58802 system_pods.go:89] "storage-provisioner" [6878a446-51c1-423d-9150-b28bfd7b21d2] Running
	I1114 14:33:50.287627   58802 retry.go:31] will retry after 5.056349943s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 14:33:47.852423   61837 main.go:141] libmachine: (newest-cni-981589) Calling .Start
	I1114 14:33:47.852596   61837 main.go:141] libmachine: (newest-cni-981589) Ensuring networks are active...
	I1114 14:33:47.853326   61837 main.go:141] libmachine: (newest-cni-981589) Ensuring network default is active
	I1114 14:33:47.853624   61837 main.go:141] libmachine: (newest-cni-981589) Ensuring network mk-newest-cni-981589 is active
	I1114 14:33:47.854104   61837 main.go:141] libmachine: (newest-cni-981589) Getting domain xml...
	I1114 14:33:47.854788   61837 main.go:141] libmachine: (newest-cni-981589) Creating domain...
	I1114 14:33:49.186888   61837 main.go:141] libmachine: (newest-cni-981589) Waiting to get IP...
	I1114 14:33:49.187797   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:33:49.188259   61837 main.go:141] libmachine: (newest-cni-981589) DBG | unable to find current IP address of domain newest-cni-981589 in network mk-newest-cni-981589
	I1114 14:33:49.188359   61837 main.go:141] libmachine: (newest-cni-981589) DBG | I1114 14:33:49.188235   61899 retry.go:31] will retry after 222.035722ms: waiting for machine to come up
	I1114 14:33:49.411624   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:33:49.412175   61837 main.go:141] libmachine: (newest-cni-981589) DBG | unable to find current IP address of domain newest-cni-981589 in network mk-newest-cni-981589
	I1114 14:33:49.412221   61837 main.go:141] libmachine: (newest-cni-981589) DBG | I1114 14:33:49.412124   61899 retry.go:31] will retry after 293.421301ms: waiting for machine to come up
	I1114 14:33:49.707704   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:33:49.708224   61837 main.go:141] libmachine: (newest-cni-981589) DBG | unable to find current IP address of domain newest-cni-981589 in network mk-newest-cni-981589
	I1114 14:33:49.708253   61837 main.go:141] libmachine: (newest-cni-981589) DBG | I1114 14:33:49.708208   61899 retry.go:31] will retry after 415.866841ms: waiting for machine to come up
	I1114 14:33:50.125936   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:33:50.126527   61837 main.go:141] libmachine: (newest-cni-981589) DBG | unable to find current IP address of domain newest-cni-981589 in network mk-newest-cni-981589
	I1114 14:33:50.126562   61837 main.go:141] libmachine: (newest-cni-981589) DBG | I1114 14:33:50.126484   61899 retry.go:31] will retry after 522.03914ms: waiting for machine to come up
	I1114 14:33:50.649795   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:33:50.650369   61837 main.go:141] libmachine: (newest-cni-981589) DBG | unable to find current IP address of domain newest-cni-981589 in network mk-newest-cni-981589
	I1114 14:33:50.650399   61837 main.go:141] libmachine: (newest-cni-981589) DBG | I1114 14:33:50.650305   61899 retry.go:31] will retry after 727.813093ms: waiting for machine to come up
	I1114 14:33:51.603362   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:33:51.603947   61837 main.go:141] libmachine: (newest-cni-981589) DBG | unable to find current IP address of domain newest-cni-981589 in network mk-newest-cni-981589
	I1114 14:33:51.603978   61837 main.go:141] libmachine: (newest-cni-981589) DBG | I1114 14:33:51.603896   61899 retry.go:31] will retry after 725.403609ms: waiting for machine to come up
	I1114 14:33:52.330880   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:33:52.331415   61837 main.go:141] libmachine: (newest-cni-981589) DBG | unable to find current IP address of domain newest-cni-981589 in network mk-newest-cni-981589
	I1114 14:33:52.331444   61837 main.go:141] libmachine: (newest-cni-981589) DBG | I1114 14:33:52.331390   61899 retry.go:31] will retry after 1.110573799s: waiting for machine to come up
	I1114 14:33:55.350968   58802 system_pods.go:86] 4 kube-system pods found
	I1114 14:33:55.350999   58802 system_pods.go:89] "coredns-5644d7b6d9-dss8s" [d70babb7-bb72-4369-b05d-09026c086dde] Running
	I1114 14:33:55.351007   58802 system_pods.go:89] "kube-proxy-cdd4t" [2ec12192-4ffc-498a-a39a-9efe5a0ea335] Running
	I1114 14:33:55.351018   58802 system_pods.go:89] "metrics-server-74d5856cc6-gsjk7" [fb8615e8-85c2-466a-8c4c-d0da4fe15502] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:33:55.351024   58802 system_pods.go:89] "storage-provisioner" [6878a446-51c1-423d-9150-b28bfd7b21d2] Running
	I1114 14:33:55.351046   58802 retry.go:31] will retry after 7.10050753s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 14:33:53.443040   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:33:53.443508   61837 main.go:141] libmachine: (newest-cni-981589) DBG | unable to find current IP address of domain newest-cni-981589 in network mk-newest-cni-981589
	I1114 14:33:53.443545   61837 main.go:141] libmachine: (newest-cni-981589) DBG | I1114 14:33:53.443448   61899 retry.go:31] will retry after 1.466392992s: waiting for machine to come up
	I1114 14:33:54.910927   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:33:54.911411   61837 main.go:141] libmachine: (newest-cni-981589) DBG | unable to find current IP address of domain newest-cni-981589 in network mk-newest-cni-981589
	I1114 14:33:54.911436   61837 main.go:141] libmachine: (newest-cni-981589) DBG | I1114 14:33:54.911362   61899 retry.go:31] will retry after 1.193721352s: waiting for machine to come up
	I1114 14:33:56.106709   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:33:56.107231   61837 main.go:141] libmachine: (newest-cni-981589) DBG | unable to find current IP address of domain newest-cni-981589 in network mk-newest-cni-981589
	I1114 14:33:56.107269   61837 main.go:141] libmachine: (newest-cni-981589) DBG | I1114 14:33:56.107179   61899 retry.go:31] will retry after 2.309318133s: waiting for machine to come up
	I1114 14:33:58.417894   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:33:58.418413   61837 main.go:141] libmachine: (newest-cni-981589) DBG | unable to find current IP address of domain newest-cni-981589 in network mk-newest-cni-981589
	I1114 14:33:58.418439   61837 main.go:141] libmachine: (newest-cni-981589) DBG | I1114 14:33:58.418367   61899 retry.go:31] will retry after 2.733264299s: waiting for machine to come up
	I1114 14:34:01.154803   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:01.155466   61837 main.go:141] libmachine: (newest-cni-981589) DBG | unable to find current IP address of domain newest-cni-981589 in network mk-newest-cni-981589
	I1114 14:34:01.155499   61837 main.go:141] libmachine: (newest-cni-981589) DBG | I1114 14:34:01.155409   61899 retry.go:31] will retry after 2.605112202s: waiting for machine to come up
	I1114 14:34:02.456749   58802 system_pods.go:86] 5 kube-system pods found
	I1114 14:34:02.456776   58802 system_pods.go:89] "coredns-5644d7b6d9-dss8s" [d70babb7-bb72-4369-b05d-09026c086dde] Running
	I1114 14:34:02.456781   58802 system_pods.go:89] "etcd-old-k8s-version-133714" [df72cbdd-a207-4ca5-9d9a-7ee34e5f4774] Pending
	I1114 14:34:02.456785   58802 system_pods.go:89] "kube-proxy-cdd4t" [2ec12192-4ffc-498a-a39a-9efe5a0ea335] Running
	I1114 14:34:02.456792   58802 system_pods.go:89] "metrics-server-74d5856cc6-gsjk7" [fb8615e8-85c2-466a-8c4c-d0da4fe15502] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:34:02.456797   58802 system_pods.go:89] "storage-provisioner" [6878a446-51c1-423d-9150-b28bfd7b21d2] Running
	I1114 14:34:02.456812   58802 retry.go:31] will retry after 7.581514677s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1114 14:34:03.762399   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:03.762857   61837 main.go:141] libmachine: (newest-cni-981589) DBG | unable to find current IP address of domain newest-cni-981589 in network mk-newest-cni-981589
	I1114 14:34:03.762884   61837 main.go:141] libmachine: (newest-cni-981589) DBG | I1114 14:34:03.762811   61899 retry.go:31] will retry after 3.014372228s: waiting for machine to come up
	I1114 14:34:06.778337   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:06.778853   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has current primary IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:06.778883   61837 main.go:141] libmachine: (newest-cni-981589) Found IP for machine: 192.168.39.162
	I1114 14:34:06.778921   61837 main.go:141] libmachine: (newest-cni-981589) Reserving static IP address...
	I1114 14:34:06.779384   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "newest-cni-981589", mac: "52:54:00:52:43:87", ip: "192.168.39.162"} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:06.779404   61837 main.go:141] libmachine: (newest-cni-981589) Reserved static IP address: 192.168.39.162
	I1114 14:34:06.779419   61837 main.go:141] libmachine: (newest-cni-981589) DBG | skip adding static IP to network mk-newest-cni-981589 - found existing host DHCP lease matching {name: "newest-cni-981589", mac: "52:54:00:52:43:87", ip: "192.168.39.162"}
	I1114 14:34:06.779433   61837 main.go:141] libmachine: (newest-cni-981589) DBG | Getting to WaitForSSH function...
	I1114 14:34:06.779446   61837 main.go:141] libmachine: (newest-cni-981589) Waiting for SSH to be available...
	I1114 14:34:06.781625   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:06.781956   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:06.781988   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:06.782147   61837 main.go:141] libmachine: (newest-cni-981589) DBG | Using SSH client type: external
	I1114 14:34:06.782176   61837 main.go:141] libmachine: (newest-cni-981589) DBG | Using SSH private key: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/newest-cni-981589/id_rsa (-rw-------)
	I1114 14:34:06.782216   61837 main.go:141] libmachine: (newest-cni-981589) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17581-6041/.minikube/machines/newest-cni-981589/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1114 14:34:06.782241   61837 main.go:141] libmachine: (newest-cni-981589) DBG | About to run SSH command:
	I1114 14:34:06.782258   61837 main.go:141] libmachine: (newest-cni-981589) DBG | exit 0
	I1114 14:34:06.877623   61837 main.go:141] libmachine: (newest-cni-981589) DBG | SSH cmd err, output: <nil>: 
	I1114 14:34:06.877966   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetConfigRaw
	I1114 14:34:06.878602   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetIP
	I1114 14:34:06.881294   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:06.881662   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:06.881705   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:06.881922   61837 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/newest-cni-981589/config.json ...
	I1114 14:34:06.882128   61837 machine.go:88] provisioning docker machine ...
	I1114 14:34:06.882152   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	I1114 14:34:06.882368   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetMachineName
	I1114 14:34:06.882538   61837 buildroot.go:166] provisioning hostname "newest-cni-981589"
	I1114 14:34:06.882557   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetMachineName
	I1114 14:34:06.882686   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:06.884649   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:06.884960   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:06.884991   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:06.885100   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHPort
	I1114 14:34:06.885257   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:06.885369   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:06.885501   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHUsername
	I1114 14:34:06.885649   61837 main.go:141] libmachine: Using SSH client type: native
	I1114 14:34:06.886034   61837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1114 14:34:06.886053   61837 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-981589 && echo "newest-cni-981589" | sudo tee /etc/hostname
	I1114 14:34:07.026812   61837 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-981589
	
	I1114 14:34:07.026859   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:07.029563   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:07.029919   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:07.029945   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:07.030073   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHPort
	I1114 14:34:07.030284   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:07.030453   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:07.030603   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHUsername
	I1114 14:34:07.030761   61837 main.go:141] libmachine: Using SSH client type: native
	I1114 14:34:07.031105   61837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1114 14:34:07.031130   61837 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-981589' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-981589/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-981589' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 14:34:07.166639   61837 main.go:141] libmachine: SSH cmd err, output: <nil>: 
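The shell snippet above rewrites the 127.0.1.1 entry so the freshly set hostname resolves locally, appending one if none exists. The same fix-up expressed as a pure Go function over the file contents (illustrative only, not the code minikube runs):

	// hostsfix.go: the 127.0.1.1 fix-up from the shell snippet above,
	// as a pure function over the hosts-file contents.
	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	func ensureHostname(hosts, name string) string {
		// Nothing to do if some line already ends in the hostname.
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.MatchString(hosts) {
			return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostname("127.0.0.1 localhost\n127.0.1.1 old-name\n", "newest-cni-981589"))
	}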
	I1114 14:34:07.166669   61837 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17581-6041/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-6041/.minikube}
	I1114 14:34:07.166716   61837 buildroot.go:174] setting up certificates
	I1114 14:34:07.166735   61837 provision.go:83] configureAuth start
	I1114 14:34:07.166751   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetMachineName
	I1114 14:34:07.167064   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetIP
	I1114 14:34:07.169780   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:07.170182   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:07.170210   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:07.170414   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:07.172517   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:07.172852   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:07.172877   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:07.173025   61837 provision.go:138] copyHostCerts
	I1114 14:34:07.173082   61837 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem, removing ...
	I1114 14:34:07.173102   61837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem
	I1114 14:34:07.173191   61837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-6041/.minikube/ca.pem (1082 bytes)
	I1114 14:34:07.173302   61837 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem, removing ...
	I1114 14:34:07.173313   61837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem
	I1114 14:34:07.173351   61837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-6041/.minikube/cert.pem (1123 bytes)
	I1114 14:34:07.173457   61837 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem, removing ...
	I1114 14:34:07.173470   61837 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem
	I1114 14:34:07.173515   61837 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-6041/.minikube/key.pem (1675 bytes)
	I1114 14:34:07.173583   61837 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca-key.pem org=jenkins.newest-cni-981589 san=[192.168.39.162 192.168.39.162 localhost 127.0.0.1 minikube newest-cni-981589]
	I1114 14:34:07.381102   61837 provision.go:172] copyRemoteCerts
	I1114 14:34:07.381173   61837 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 14:34:07.381193   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:07.383681   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:07.383911   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:07.383945   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:07.384168   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHPort
	I1114 14:34:07.384378   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:07.384509   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHUsername
	I1114 14:34:07.384677   61837 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/newest-cni-981589/id_rsa Username:docker}
	I1114 14:34:07.479017   61837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 14:34:07.503218   61837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1114 14:34:07.526929   61837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 14:34:07.549722   61837 provision.go:86] duration metric: configureAuth took 382.970244ms
	I1114 14:34:07.549756   61837 buildroot.go:189] setting minikube options for container-runtime
	I1114 14:34:07.549987   61837 config.go:182] Loaded profile config "newest-cni-981589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 14:34:07.550018   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	I1114 14:34:07.550317   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:07.553417   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:07.553923   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:07.553954   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:07.554172   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHPort
	I1114 14:34:07.554400   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:07.554588   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:07.554775   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHUsername
	I1114 14:34:07.554959   61837 main.go:141] libmachine: Using SSH client type: native
	I1114 14:34:07.555400   61837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1114 14:34:07.555414   61837 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1114 14:34:07.691034   61837 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1114 14:34:07.691069   61837 buildroot.go:70] root file system type: tmpfs
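Detecting a tmpfs root, as `df --output=fstype / | tail -n 1` does above, can also be done directly with statfs(2). A Linux-only sketch using the golang.org/x/sys/unix module (an assumption for illustration; minikube itself runs df over SSH here):

	// rootfs.go: detect whether / is tmpfs without shelling out to df.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var st unix.Statfs_t
		if err := unix.Statfs("/", &st); err != nil {
			panic(err)
		}
		// f_type is a filesystem magic number; TMPFS_MAGIC is 0x01021994.
		if st.Type == unix.TMPFS_MAGIC {
			fmt.Println("root filesystem is tmpfs")
		} else {
			fmt.Printf("root filesystem magic: 0x%x\n", st.Type)
		}
	}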
	I1114 14:34:07.691169   61837 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1114 14:34:07.691186   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:07.693785   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:07.694086   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:07.694128   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:07.694285   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHPort
	I1114 14:34:07.694491   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:07.694641   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:07.694767   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHUsername
	I1114 14:34:07.694920   61837 main.go:141] libmachine: Using SSH client type: native
	I1114 14:34:07.695376   61837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1114 14:34:07.695472   61837 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1114 14:34:10.044917   58802 system_pods.go:86] 6 kube-system pods found
	I1114 14:34:10.044943   58802 system_pods.go:89] "coredns-5644d7b6d9-dss8s" [d70babb7-bb72-4369-b05d-09026c086dde] Running
	I1114 14:34:10.044948   58802 system_pods.go:89] "etcd-old-k8s-version-133714" [df72cbdd-a207-4ca5-9d9a-7ee34e5f4774] Running
	I1114 14:34:10.044953   58802 system_pods.go:89] "kube-apiserver-old-k8s-version-133714" [03c9ad10-c07c-41b8-8924-a69b88419baa] Running
	I1114 14:34:10.044957   58802 system_pods.go:89] "kube-proxy-cdd4t" [2ec12192-4ffc-498a-a39a-9efe5a0ea335] Running
	I1114 14:34:10.044964   58802 system_pods.go:89] "metrics-server-74d5856cc6-gsjk7" [fb8615e8-85c2-466a-8c4c-d0da4fe15502] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:34:10.044969   58802 system_pods.go:89] "storage-provisioner" [6878a446-51c1-423d-9150-b28bfd7b21d2] Running
	I1114 14:34:10.044983   58802 retry.go:31] will retry after 9.682513947s: missing components: kube-controller-manager, kube-scheduler
	I1114 14:34:07.842338   61837 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1114 14:34:07.842377   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:07.845014   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:07.845376   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:07.845406   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:07.845619   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHPort
	I1114 14:34:07.845815   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:07.845982   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:07.846169   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHUsername
	I1114 14:34:07.846397   61837 main.go:141] libmachine: Using SSH client type: native
	I1114 14:34:07.846699   61837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1114 14:34:07.846716   61837 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1114 14:34:08.734148   61837 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1114 14:34:08.734178   61837 machine.go:91] provisioned docker machine in 1.852034539s
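The diff/mv/systemctl one-liner above implements an update-only-if-changed pattern: compare the rendered unit against what is on disk, and only replace the file and restart the daemon when the contents differ (here diff failed because no unit existed yet, so the new one was installed and enabled). A hedged Go sketch of the same idea (paths and contents are illustrative):

	// unitsync.go: "replace only if changed" -- write the new unit only
	// when it differs, so an unnecessary service restart is avoided.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func syncFile(path string, want []byte) (changed bool, err error) {
		have, err := os.ReadFile(path)
		if err == nil && bytes.Equal(have, want) {
			return false, nil // contents identical: nothing to do
		}
		if err := os.WriteFile(path, want, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		changed, err := syncFile("/tmp/docker.service", []byte("[Unit]\nDescription=demo\n"))
		if err != nil {
			panic(err)
		}
		if changed {
			fmt.Println("unit updated; daemon-reload and restart would follow here")
		} else {
			fmt.Println("unit unchanged; no restart needed")
		}
	}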
	I1114 14:34:08.734189   61837 start.go:300] post-start starting for "newest-cni-981589" (driver="kvm2")
	I1114 14:34:08.734202   61837 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 14:34:08.734242   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	I1114 14:34:08.734722   61837 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 14:34:08.734757   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:08.737416   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:08.737849   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:08.737882   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:08.738060   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHPort
	I1114 14:34:08.738241   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:08.738411   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHUsername
	I1114 14:34:08.738574   61837 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/newest-cni-981589/id_rsa Username:docker}
	I1114 14:34:08.833908   61837 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 14:34:08.838048   61837 info.go:137] Remote host: Buildroot 2021.02.12
	I1114 14:34:08.838073   61837 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-6041/.minikube/addons for local assets ...
	I1114 14:34:08.838136   61837 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-6041/.minikube/files for local assets ...
	I1114 14:34:08.838251   61837 filesync.go:149] local asset: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem -> 132382.pem in /etc/ssl/certs
	I1114 14:34:08.838380   61837 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 14:34:08.847270   61837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem --> /etc/ssl/certs/132382.pem (1708 bytes)
	I1114 14:34:08.873459   61837 start.go:303] post-start completed in 139.254408ms
	I1114 14:34:08.873487   61837 fix.go:56] fixHost completed within 21.044560431s
	I1114 14:34:08.873505   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:08.876275   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:08.876647   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:08.876678   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:08.876786   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHPort
	I1114 14:34:08.876989   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:08.877161   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:08.877331   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHUsername
	I1114 14:34:08.877535   61837 main.go:141] libmachine: Using SSH client type: native
	I1114 14:34:08.877880   61837 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8089e0] 0x80b6c0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1114 14:34:08.877896   61837 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1114 14:34:09.006364   61837 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699972448.952972268
	
	I1114 14:34:09.006396   61837 fix.go:206] guest clock: 1699972448.952972268
	I1114 14:34:09.006407   61837 fix.go:219] Guest: 2023-11-14 14:34:08.952972268 +0000 UTC Remote: 2023-11-14 14:34:08.873490494 +0000 UTC m=+21.210389185 (delta=79.481774ms)
	I1114 14:34:09.006446   61837 fix.go:190] guest clock delta is within tolerance: 79.481774ms
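The clock fix-up above runs `date +%s.%N` in the guest and compares it against the host's timestamp. A small Go sketch that reproduces the delta computation using the log's own values (the 2-second tolerance is an assumption for illustration):

	// clockdelta.go: parse the guest's `date +%s.%N` output and check the
	// skew against a tolerance, as the guest-clock check above does.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func guestTime(out string) (time.Time, error) {
		sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
		s, err := strconv.ParseInt(sec, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		n, err := strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		return time.Unix(s, n), nil
	}

	func main() {
		guest, err := guestTime("1699972448.952972268") // guest clock from the log
		if err != nil {
			panic(err)
		}
		remote := time.Unix(1699972448, 873490494) // host-side timestamp from the log
		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		// Prints a delta of ~79.48ms, matching the log line above.
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < 2*time.Second)
	}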
	I1114 14:34:09.006455   61837 start.go:83] releasing machines lock for "newest-cni-981589", held for 21.177540974s
	I1114 14:34:09.006485   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	I1114 14:34:09.006763   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetIP
	I1114 14:34:09.009706   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:09.010123   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:09.010153   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:09.010371   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	I1114 14:34:09.010954   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	I1114 14:34:09.011161   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	I1114 14:34:09.011301   61837 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 14:34:09.011337   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:09.011443   61837 ssh_runner.go:195] Run: cat /version.json
	I1114 14:34:09.011470   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:09.013932   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:09.014229   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:09.014264   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:09.014288   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:09.014468   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHPort
	I1114 14:34:09.014658   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:09.014732   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:09.014757   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:09.014820   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHUsername
	I1114 14:34:09.014915   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHPort
	I1114 14:34:09.015000   61837 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/newest-cni-981589/id_rsa Username:docker}
	I1114 14:34:09.015071   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:09.015204   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHUsername
	I1114 14:34:09.015333   61837 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/newest-cni-981589/id_rsa Username:docker}
	I1114 14:34:09.110914   61837 ssh_runner.go:195] Run: systemctl --version
	I1114 14:34:09.133601   61837 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1114 14:34:09.139162   61837 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1114 14:34:09.139222   61837 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:34:09.155106   61837 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 14:34:09.155133   61837 start.go:472] detecting cgroup driver to use...
	I1114 14:34:09.155291   61837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 14:34:09.172260   61837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1114 14:34:09.182483   61837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1114 14:34:09.192320   61837 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1114 14:34:09.192379   61837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1114 14:34:09.201996   61837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1114 14:34:09.211683   61837 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1114 14:34:09.221414   61837 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1114 14:34:09.231209   61837 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 14:34:09.241386   61837 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1114 14:34:09.251254   61837 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 14:34:09.260172   61837 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 14:34:09.269014   61837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:34:09.375345   61837 ssh_runner.go:195] Run: sudo systemctl restart containerd
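The sed commands above switch containerd to the cgroupfs driver by rewriting SystemdCgroup (and related keys) in config.toml before the daemon restart. The same edit as a Go regexp, for clarity (illustrative; minikube performs it via sed over SSH):

	// cgroupcfg.go: the edit performed by the SystemdCgroup sed above,
	// done with a multiline Go regexp that preserves indentation.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true
	`
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}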
	I1114 14:34:09.392030   61837 start.go:472] detecting cgroup driver to use...
	I1114 14:34:09.392123   61837 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1114 14:34:09.408135   61837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 14:34:09.421776   61837 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 14:34:09.451073   61837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 14:34:09.463729   61837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1114 14:34:09.477617   61837 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1114 14:34:09.505718   61837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1114 14:34:09.518922   61837 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 14:34:09.536474   61837 ssh_runner.go:195] Run: which cri-dockerd
	I1114 14:34:09.540444   61837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1114 14:34:09.548642   61837 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1114 14:34:09.565972   61837 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1114 14:34:09.669806   61837 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1114 14:34:09.789526   61837 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1114 14:34:09.789719   61837 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1114 14:34:09.806642   61837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:34:09.916991   61837 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1114 14:34:11.346412   61837 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.429387449s)
	I1114 14:34:11.346469   61837 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1114 14:34:11.456156   61837 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1114 14:34:11.567774   61837 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1114 14:34:11.684829   61837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:34:11.797536   61837 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1114 14:34:11.813836   61837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:34:11.924302   61837 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1114 14:34:12.003624   61837 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1114 14:34:12.003685   61837 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1114 14:34:12.009667   61837 start.go:540] Will wait 60s for crictl version
	I1114 14:34:12.009747   61837 ssh_runner.go:195] Run: which crictl
	I1114 14:34:12.014015   61837 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 14:34:12.069581   61837 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1114 14:34:12.069661   61837 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1114 14:34:12.095726   61837 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1114 14:34:12.123891   61837 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
	I1114 14:34:12.124007   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetIP
	I1114 14:34:12.126489   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:12.126882   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:12.126907   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:12.127063   61837 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1114 14:34:12.130880   61837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 14:34:12.144121   61837 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1114 14:34:12.145551   61837 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1114 14:34:12.145626   61837 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1114 14:34:12.164185   61837 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1114 14:34:12.164208   61837 docker.go:601] Images already preloaded, skipping extraction
	I1114 14:34:12.164256   61837 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1114 14:34:12.183430   61837 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1114 14:34:12.183450   61837 cache_images.go:84] Images are preloaded, skipping loading
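The preload check above lists the runtime's images and concludes extraction can be skipped. The decision reduces to a set-containment test; a minimal sketch (image lists abbreviated for illustration):

	// preload.go: decide whether preload extraction can be skipped by
	// checking that every required image is already present.
	package main

	import "fmt"

	func allPresent(required, have []string) bool {
		present := make(map[string]bool, len(have))
		for _, img := range have {
			present[img] = true
		}
		for _, img := range required {
			if !present[img] {
				return false
			}
		}
		return true
	}

	func main() {
		required := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.9-0"}
		have := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.9-0", "gcr.io/k8s-minikube/storage-provisioner:v5"}
		fmt.Println("skip extraction:", allPresent(required, have))
	}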
	I1114 14:34:12.183510   61837 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1114 14:34:12.209630   61837 cni.go:84] Creating CNI manager for ""
	I1114 14:34:12.209656   61837 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1114 14:34:12.209672   61837 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1114 14:34:12.209687   61837 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-981589 NodeName:newest-cni-981589 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 14:34:12.209813   61837 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-981589"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 14:34:12.209885   61837 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-981589 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:newest-cni-981589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 14:34:12.209936   61837 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 14:34:12.219591   61837 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 14:34:12.219656   61837 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 14:34:12.228687   61837 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (417 bytes)
	I1114 14:34:12.244290   61837 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 14:34:12.259563   61837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1114 14:34:12.275508   61837 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1114 14:34:12.279131   61837 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 14:34:12.290499   61837 certs.go:56] Setting up /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/newest-cni-981589 for IP: 192.168.39.162
	I1114 14:34:12.290528   61837 certs.go:190] acquiring lock for shared ca certs: {Name:mkb3fe4539ce9ed96ff0e979200082f9548591da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:34:12.290669   61837 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17581-6041/.minikube/ca.key
	I1114 14:34:12.290707   61837 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.key
	I1114 14:34:12.290779   61837 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/newest-cni-981589/client.key
	I1114 14:34:12.290825   61837 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/newest-cni-981589/apiserver.key.6bed0394
	I1114 14:34:12.290860   61837 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/newest-cni-981589/proxy-client.key
	I1114 14:34:12.290953   61837 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/13238.pem (1338 bytes)
	W1114 14:34:12.290978   61837 certs.go:433] ignoring /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/13238_empty.pem, impossibly tiny 0 bytes
	I1114 14:34:12.290990   61837 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca-key.pem (1679 bytes)
	I1114 14:34:12.291012   61837 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/ca.pem (1082 bytes)
	I1114 14:34:12.291048   61837 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/cert.pem (1123 bytes)
	I1114 14:34:12.291072   61837 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/certs/home/jenkins/minikube-integration/17581-6041/.minikube/certs/key.pem (1675 bytes)
	I1114 14:34:12.291111   61837 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem (1708 bytes)
	I1114 14:34:12.291719   61837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/newest-cni-981589/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 14:34:12.313963   61837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/newest-cni-981589/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 14:34:12.338248   61837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/newest-cni-981589/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 14:34:12.361699   61837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/newest-cni-981589/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 14:34:12.385239   61837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 14:34:12.407598   61837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1114 14:34:12.431588   61837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 14:34:12.455420   61837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 14:34:12.478500   61837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/certs/13238.pem --> /usr/share/ca-certificates/13238.pem (1338 bytes)
	I1114 14:34:12.502560   61837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/ssl/certs/132382.pem --> /usr/share/ca-certificates/132382.pem (1708 bytes)
	I1114 14:34:12.526311   61837 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-6041/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 14:34:12.550276   61837 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 14:34:12.568363   61837 ssh_runner.go:195] Run: openssl version
	I1114 14:34:12.574214   61837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13238.pem && ln -fs /usr/share/ca-certificates/13238.pem /etc/ssl/certs/13238.pem"
	I1114 14:34:12.585693   61837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13238.pem
	I1114 14:34:12.590452   61837 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 13:40 /usr/share/ca-certificates/13238.pem
	I1114 14:34:12.590500   61837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13238.pem
	I1114 14:34:12.596284   61837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13238.pem /etc/ssl/certs/51391683.0"
	I1114 14:34:12.607517   61837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132382.pem && ln -fs /usr/share/ca-certificates/132382.pem /etc/ssl/certs/132382.pem"
	I1114 14:34:12.618229   61837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132382.pem
	I1114 14:34:12.622513   61837 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 13:40 /usr/share/ca-certificates/132382.pem
	I1114 14:34:12.622573   61837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132382.pem
	I1114 14:34:12.628032   61837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132382.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 14:34:12.638279   61837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 14:34:12.648266   61837 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:34:12.652948   61837 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:34:12.653032   61837 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:34:12.658433   61837 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 14:34:12.668310   61837 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 14:34:12.672525   61837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1114 14:34:12.678283   61837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1114 14:34:12.683691   61837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1114 14:34:12.689230   61837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1114 14:34:12.694748   61837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1114 14:34:12.700182   61837 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
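Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate remains valid for at least another day before the cluster is restarted. The equivalent check in Go's standard library (the path in main is a placeholder for any of the certs checked above):

	// certcheck.go: the Go equivalent of `openssl x509 -noout -checkend 86400`:
	// parse a PEM certificate and verify it stays valid for the given duration.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	func validFor(path string, d time.Duration) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return err
		}
		if time.Now().Add(d).After(cert.NotAfter) {
			return fmt.Errorf("certificate expires at %v", cert.NotAfter)
		}
		return nil
	}

	func main() {
		// Placeholder path for illustration.
		if err := validFor("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour); err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("certificate valid for at least 24h")
	}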
	I1114 14:34:12.705914   61837 kubeadm.go:404] StartCluster: {Name:newest-cni-981589 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-981589 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:34:12.706033   61837 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1114 14:34:12.724840   61837 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 14:34:12.734580   61837 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1114 14:34:12.734605   61837 kubeadm.go:636] restartCluster start
	I1114 14:34:12.734661   61837 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1114 14:34:12.743648   61837 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:12.744195   61837 kubeconfig.go:135] verify returned: extract IP: "newest-cni-981589" does not appear in /home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 14:34:12.744443   61837 kubeconfig.go:146] "newest-cni-981589" context is missing from /home/jenkins/minikube-integration/17581-6041/kubeconfig - will repair!
	I1114 14:34:12.744936   61837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-6041/kubeconfig: {Name:mk8c7c760be5355229ff2da52cb7898ad12a909c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:34:12.746456   61837 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1114 14:34:12.756177   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:12.756228   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:12.768013   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:12.768031   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:12.768072   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:12.779537   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:13.280514   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:13.280614   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:13.293858   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:13.780378   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:13.780481   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:13.793113   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:14.279630   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:14.279729   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:14.292557   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:14.780096   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:14.780221   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:14.791945   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:15.280584   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:15.280651   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:15.292698   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:15.780322   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:15.780402   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:15.792679   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:16.280264   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:16.280353   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:16.292738   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:16.780321   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:16.780400   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:16.792751   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:17.280353   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:17.280419   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:17.292477   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:19.734376   58802 system_pods.go:86] 7 kube-system pods found
	I1114 14:34:19.734398   58802 system_pods.go:89] "coredns-5644d7b6d9-dss8s" [d70babb7-bb72-4369-b05d-09026c086dde] Running
	I1114 14:34:19.734403   58802 system_pods.go:89] "etcd-old-k8s-version-133714" [df72cbdd-a207-4ca5-9d9a-7ee34e5f4774] Running
	I1114 14:34:19.734408   58802 system_pods.go:89] "kube-apiserver-old-k8s-version-133714" [03c9ad10-c07c-41b8-8924-a69b88419baa] Running
	I1114 14:34:19.734412   58802 system_pods.go:89] "kube-proxy-cdd4t" [2ec12192-4ffc-498a-a39a-9efe5a0ea335] Running
	I1114 14:34:19.734416   58802 system_pods.go:89] "kube-scheduler-old-k8s-version-133714" [87a9539e-5824-47ad-b171-f76f3241ecf6] Running
	I1114 14:34:19.734423   58802 system_pods.go:89] "metrics-server-74d5856cc6-gsjk7" [fb8615e8-85c2-466a-8c4c-d0da4fe15502] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:34:19.734433   58802 system_pods.go:89] "storage-provisioner" [6878a446-51c1-423d-9150-b28bfd7b21d2] Running
	I1114 14:34:19.734450   58802 retry.go:31] will retry after 8.673876995s: missing components: kube-controller-manager
	I1114 14:34:17.780566   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:17.780643   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:17.792437   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:18.280529   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:18.280609   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:18.292141   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:18.779599   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:18.779668   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:18.791408   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:19.279901   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:19.280003   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:19.291712   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:19.780322   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:19.780410   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:19.794267   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:20.279791   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:20.279875   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:20.291814   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:20.780485   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:20.780581   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:20.792423   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:21.279992   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:21.280076   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:21.292405   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:21.779944   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:21.780025   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:21.791943   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1114 14:34:22.280542   61837 api_server.go:166] Checking apiserver status ...
	I1114 14:34:22.280635   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1114 14:34:22.292936   61837 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
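The repeated "Checking apiserver status" entries above are a poll: roughly every 500ms minikube looks for a kube-apiserver process whose command line mentions minikube, and exit status 1 from pgrep simply means no such process exists yet. The probe itself, runnable by hand on the node (flags as in the log):

    # -f: match against the full command line, -x: require the pattern to
    # match that whole line, -n: pick the newest match; prints the PID when found
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'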
	I1114 14:34:22.756530   61837 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1114 14:34:22.756559   61837 kubeadm.go:1128] stopping kube-system containers ...
	I1114 14:34:22.756618   61837 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1114 14:34:22.779541   61837 docker.go:469] Stopping containers: [cbf0eca35050 ddc6e941060e 737cec688698 ae55bac9ab95 feff2a94651f b215f07f66ed c7090e3c87fe ed7736d30d58 af4481b6cfd3 d0c1858b149f d86280608500 92ff3a625b68 b78dfb760301 d225e47424c7 abf24f555e2f]
	I1114 14:34:22.779622   61837 ssh_runner.go:195] Run: docker stop cbf0eca35050 ddc6e941060e 737cec688698 ae55bac9ab95 feff2a94651f b215f07f66ed c7090e3c87fe ed7736d30d58 af4481b6cfd3 d0c1858b149f d86280608500 92ff3a625b68 b78dfb760301 d225e47424c7 abf24f555e2f
	I1114 14:34:22.803030   61837 ssh_runner.go:195] Run: sudo systemctl stop kubelet
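Before reconfiguring, minikube quiesces the node: it lists every kube-system pod container by name filter, stops them all (the 15 IDs above), then stops the kubelet so nothing restarts them. Roughly, as a shell sketch (the name filter is the same regex the log passes to docker ps):

    # collect kube-system container IDs, stop them, then stop the kubelet
    docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}' \
      | xargs -r docker stop
    sudo systemctl stop kubelet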
	I1114 14:34:22.819948   61837 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 14:34:22.829623   61837 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 14:34:22.829688   61837 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 14:34:22.839260   61837 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1114 14:34:22.839287   61837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 14:34:22.963640   61837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 14:34:23.675214   61837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1114 14:34:23.856987   61837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 14:34:23.974800   61837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
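Rather than a full kubeadm init, the restart replays individual phases against the existing config, in the order shown above: certs, kubeconfig files, kubelet start, static control-plane manifests, then local etcd. Each step is the same invocation with a different phase, e.g. (binary path, version, and config path taken verbatim from the log):

    sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml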
	I1114 14:34:24.062409   61837 api_server.go:52] waiting for apiserver process to appear ...
	I1114 14:34:24.062488   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:34:24.076902   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:34:24.603349   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:34:25.103507   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:34:25.602873   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:34:26.103136   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:34:26.141199   61837 api_server.go:72] duration metric: took 2.078787771s to wait for apiserver process to appear ...
	I1114 14:34:26.141225   61837 api_server.go:88] waiting for apiserver healthz status ...
	I1114 14:34:26.141241   61837 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1114 14:34:28.415100   58802 system_pods.go:86] 8 kube-system pods found
	I1114 14:34:28.415126   58802 system_pods.go:89] "coredns-5644d7b6d9-dss8s" [d70babb7-bb72-4369-b05d-09026c086dde] Running
	I1114 14:34:28.415132   58802 system_pods.go:89] "etcd-old-k8s-version-133714" [df72cbdd-a207-4ca5-9d9a-7ee34e5f4774] Running
	I1114 14:34:28.415137   58802 system_pods.go:89] "kube-apiserver-old-k8s-version-133714" [03c9ad10-c07c-41b8-8924-a69b88419baa] Running
	I1114 14:34:28.415142   58802 system_pods.go:89] "kube-controller-manager-old-k8s-version-133714" [55806e5e-0d60-494e-9880-7d59a20d20ed] Pending
	I1114 14:34:28.415145   58802 system_pods.go:89] "kube-proxy-cdd4t" [2ec12192-4ffc-498a-a39a-9efe5a0ea335] Running
	I1114 14:34:28.415149   58802 system_pods.go:89] "kube-scheduler-old-k8s-version-133714" [87a9539e-5824-47ad-b171-f76f3241ecf6] Running
	I1114 14:34:28.415155   58802 system_pods.go:89] "metrics-server-74d5856cc6-gsjk7" [fb8615e8-85c2-466a-8c4c-d0da4fe15502] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:34:28.415159   58802 system_pods.go:89] "storage-provisioner" [6878a446-51c1-423d-9150-b28bfd7b21d2] Running
	I1114 14:34:28.415176   58802 retry.go:31] will retry after 11.623444742s: missing components: kube-controller-manager
	I1114 14:34:29.458258   61837 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1114 14:34:29.458294   61837 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1114 14:34:29.458309   61837 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1114 14:34:29.552661   61837 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 14:34:29.552694   61837 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
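The 500 body enumerates the apiserver's named health checks; the [-] entries are post-start hooks (RBAC bootstrap, priority classes, and so on) that have not finished yet, and each one clears in the later probes below until /healthz returns 200. Individual checks are also addressable as sub-paths of /healthz, e.g.:

    # returns ok once the RBAC bootstrap hook has completed
    curl -k https://192.168.39.162:8443/healthz/poststarthook/rbac/bootstrap-roles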
	I1114 14:34:30.053407   61837 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1114 14:34:30.058636   61837 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 14:34:30.058661   61837 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 14:34:30.553213   61837 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1114 14:34:30.565140   61837 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1114 14:34:30.565181   61837 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1114 14:34:31.053732   61837 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1114 14:34:31.058817   61837 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1114 14:34:31.066510   61837 api_server.go:141] control plane version: v1.28.3
	I1114 14:34:31.066533   61837 api_server.go:131] duration metric: took 4.925301858s to wait for apiserver health ...
	I1114 14:34:31.066540   61837 cni.go:84] Creating CNI manager for ""
	I1114 14:34:31.066553   61837 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1114 14:34:31.068093   61837 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1114 14:34:31.069368   61837 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1114 14:34:31.079613   61837 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
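Because this profile runs the docker runtime on Kubernetes v1.24+, minikube writes a bridge CNI config for the pod network (pod CIDR 10.42.0.0/16 per the kubeadm pod-network-cidr extra option in the config above). The generated file can be inspected on the node, e.g.:

    minikube ssh -p newest-cni-981589 -- sudo cat /etc/cni/net.d/1-k8s.conflist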
	I1114 14:34:31.095873   61837 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 14:34:31.105401   61837 system_pods.go:59] 8 kube-system pods found
	I1114 14:34:31.105449   61837 system_pods.go:61] "coredns-5dd5756b68-54jq8" [610c8429-6191-4225-a3f5-c6892d2bf1f0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:34:31.105460   61837 system_pods.go:61] "etcd-newest-cni-981589" [3151806e-0f04-4844-afab-b204b24a0d8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 14:34:31.105470   61837 system_pods.go:61] "kube-apiserver-newest-cni-981589" [5af2a075-bdd5-4991-9cd1-57df73c9577a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 14:34:31.105483   61837 system_pods.go:61] "kube-controller-manager-newest-cni-981589" [58f4c8da-1371-43ac-9988-66fa438fb4a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 14:34:31.105488   61837 system_pods.go:61] "kube-proxy-lqqbx" [f3cb36d7-3e92-4262-b6fd-d4231fdf2e40] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1114 14:34:31.105497   61837 system_pods.go:61] "kube-scheduler-newest-cni-981589" [b9c9dc3d-5deb-40dd-b7bc-8dd6da03f446] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 14:34:31.105502   61837 system_pods.go:61] "metrics-server-57f55c9bc5-nfrlm" [938be749-76de-4a01-b720-3c61c9e5be7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:34:31.105508   61837 system_pods.go:61] "storage-provisioner" [d5d6ab5f-b14e-4b71-9781-87e00e6094f9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1114 14:34:31.105515   61837 system_pods.go:74] duration metric: took 9.624091ms to wait for pod list to return data ...
	I1114 14:34:31.105525   61837 node_conditions.go:102] verifying NodePressure condition ...
	I1114 14:34:31.109412   61837 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:34:31.109463   61837 node_conditions.go:123] node cpu capacity is 2
	I1114 14:34:31.109482   61837 node_conditions.go:105] duration metric: took 3.947905ms to run NodePressure ...
	I1114 14:34:31.109502   61837 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1114 14:34:31.387396   61837 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 14:34:31.420649   61837 ops.go:34] apiserver oom_adj: -16
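An oom_adj of -16 tells the kernel's OOM killer to strongly prefer other processes over the apiserver, and it is logged here as a sanity check that the control plane came up with the expected protection. oom_adj is the legacy interface; on current kernels the equivalent value is exposed via oom_score_adj:

    # negative values make the process less likely to be OOM-killed
    cat /proc/$(pgrep -n kube-apiserver)/oom_score_adj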
	I1114 14:34:31.420669   61837 kubeadm.go:640] restartCluster took 18.686058079s
	I1114 14:34:31.420676   61837 kubeadm.go:406] StartCluster complete in 18.714775804s
	I1114 14:34:31.420690   61837 settings.go:142] acquiring lock: {Name:mk142f790b9a645b9d961649a46a96b1fe4e46d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:34:31.420770   61837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 14:34:31.421693   61837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-6041/kubeconfig: {Name:mk8c7c760be5355229ff2da52cb7898ad12a909c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:34:31.421911   61837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 14:34:31.422006   61837 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 14:34:31.422085   61837 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-981589"
	I1114 14:34:31.422130   61837 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-981589"
	W1114 14:34:31.422141   61837 addons.go:240] addon storage-provisioner should already be in state true
	I1114 14:34:31.422161   61837 addons.go:69] Setting metrics-server=true in profile "newest-cni-981589"
	I1114 14:34:31.422200   61837 host.go:66] Checking if "newest-cni-981589" exists ...
	I1114 14:34:31.422105   61837 addons.go:69] Setting default-storageclass=true in profile "newest-cni-981589"
	I1114 14:34:31.422231   61837 addons.go:231] Setting addon metrics-server=true in "newest-cni-981589"
	I1114 14:34:31.422258   61837 config.go:182] Loaded profile config "newest-cni-981589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 14:34:31.422252   61837 addons.go:69] Setting dashboard=true in profile "newest-cni-981589"
	I1114 14:34:31.422290   61837 addons.go:231] Setting addon dashboard=true in "newest-cni-981589"
	W1114 14:34:31.422308   61837 addons.go:240] addon dashboard should already be in state true
	I1114 14:34:31.422327   61837 cache.go:107] acquiring lock: {Name:mk8ba69ebfcdf49d9b35c118b8a3c799ec4a10dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	W1114 14:34:31.422265   61837 addons.go:240] addon metrics-server should already be in state true
	I1114 14:34:31.422408   61837 cache.go:115] /home/jenkins/minikube-integration/17581-6041/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1114 14:34:31.422426   61837 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17581-6041/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 101.915µs
	I1114 14:34:31.422439   61837 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17581-6041/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1114 14:34:31.422447   61837 cache.go:87] Successfully saved all images to host disk.
	I1114 14:34:31.422380   61837 host.go:66] Checking if "newest-cni-981589" exists ...
	I1114 14:34:31.422250   61837 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-981589"
	I1114 14:34:31.422533   61837 host.go:66] Checking if "newest-cni-981589" exists ...
	I1114 14:34:31.422610   61837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:34:31.422642   61837 config.go:182] Loaded profile config "newest-cni-981589": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 14:34:31.422648   61837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:34:31.422875   61837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:34:31.422913   61837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:34:31.422938   61837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:34:31.422978   61837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:34:31.422982   61837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:34:31.422985   61837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:34:31.423015   61837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:34:31.423024   61837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:34:31.437118   61837 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-981589" context rescaled to 1 replicas
	I1114 14:34:31.437170   61837 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1114 14:34:31.440138   61837 out.go:177] * Verifying Kubernetes components...
	I1114 14:34:31.441640   61837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
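Component verification starts from the kubelet unit itself; systemctl is-active with --quiet prints nothing and reports state purely through its exit code, which is what the runner consumes. A hand-run equivalent:

    sudo systemctl is-active --quiet kubelet && echo "kubelet running"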
	I1114 14:34:31.442025   61837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43479
	I1114 14:34:31.442025   61837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34397
	I1114 14:34:31.442483   61837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38123
	I1114 14:34:31.442668   61837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46831
	I1114 14:34:31.442679   61837 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:34:31.442684   61837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36971
	I1114 14:34:31.442814   61837 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:34:31.442838   61837 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:34:31.443023   61837 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:34:31.443236   61837 main.go:141] libmachine: Using API Version  1
	I1114 14:34:31.443254   61837 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:34:31.443298   61837 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:34:31.443387   61837 main.go:141] libmachine: Using API Version  1
	I1114 14:34:31.443405   61837 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:34:31.443418   61837 main.go:141] libmachine: Using API Version  1
	I1114 14:34:31.443434   61837 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:34:31.443554   61837 main.go:141] libmachine: Using API Version  1
	I1114 14:34:31.443571   61837 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:34:31.443623   61837 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:34:31.443737   61837 main.go:141] libmachine: Using API Version  1
	I1114 14:34:31.443752   61837 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:34:31.443926   61837 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:34:31.443959   61837 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:34:31.444046   61837 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:34:31.444188   61837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:34:31.444200   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetState
	I1114 14:34:31.444246   61837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:34:31.444264   61837 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:34:31.444280   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetState
	I1114 14:34:31.444495   61837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:34:31.444528   61837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:34:31.444788   61837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:34:31.444827   61837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:34:31.447061   61837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:34:31.447124   61837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:34:31.460512   61837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32931
	I1114 14:34:31.460704   61837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45921
	I1114 14:34:31.461043   61837 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:34:31.461177   61837 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:34:31.461751   61837 main.go:141] libmachine: Using API Version  1
	I1114 14:34:31.461773   61837 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:34:31.461892   61837 main.go:141] libmachine: Using API Version  1
	I1114 14:34:31.461912   61837 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:34:31.462105   61837 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:34:31.462302   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetState
	I1114 14:34:31.462591   61837 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:34:31.462807   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetState
	I1114 14:34:31.464388   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	I1114 14:34:31.466080   61837 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 14:34:31.464948   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	I1114 14:34:31.467375   61837 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 14:34:31.467389   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 14:34:31.467408   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:31.469495   61837 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1114 14:34:31.469952   61837 addons.go:231] Setting addon default-storageclass=true in "newest-cni-981589"
	I1114 14:34:31.470472   61837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I1114 14:34:31.470634   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:31.471583   61837 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1114 14:34:31.472842   61837 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1114 14:34:31.472862   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1114 14:34:31.471365   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHPort
	I1114 14:34:31.472891   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:31.471614   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	W1114 14:34:31.471617   61837 addons.go:240] addon default-storageclass should already be in state true
	I1114 14:34:31.472989   61837 host.go:66] Checking if "newest-cni-981589" exists ...
	I1114 14:34:31.472118   61837 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:34:31.472990   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:31.473204   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:31.473330   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHUsername
	I1114 14:34:31.473448   61837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:34:31.473446   61837 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/newest-cni-981589/id_rsa Username:docker}
	I1114 14:34:31.473488   61837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:34:31.474145   61837 main.go:141] libmachine: Using API Version  1
	I1114 14:34:31.474165   61837 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:34:31.474757   61837 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:34:31.474935   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetState
	I1114 14:34:31.476082   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:31.476413   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:31.476446   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:31.476583   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHPort
	I1114 14:34:31.477246   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	I1114 14:34:31.477466   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:31.479542   61837 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1114 14:34:31.477783   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHUsername
	I1114 14:34:31.480858   61837 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 14:34:31.480871   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 14:34:31.480891   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:31.481112   61837 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/newest-cni-981589/id_rsa Username:docker}
	I1114 14:34:31.484080   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:31.484443   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:31.484478   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:31.484628   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHPort
	I1114 14:34:31.484805   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:31.484923   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHUsername
	I1114 14:34:31.485026   61837 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/newest-cni-981589/id_rsa Username:docker}
	I1114 14:34:31.487531   61837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43149
	I1114 14:34:31.488034   61837 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:34:31.488538   61837 main.go:141] libmachine: Using API Version  1
	I1114 14:34:31.488560   61837 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:34:31.488904   61837 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:34:31.489085   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	I1114 14:34:31.489295   61837 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1114 14:34:31.489319   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:31.492054   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:31.492463   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:31.492491   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:31.492602   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHPort
	I1114 14:34:31.492762   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:31.492911   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHUsername
	I1114 14:34:31.493043   61837 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/newest-cni-981589/id_rsa Username:docker}
	I1114 14:34:31.497792   61837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43949
	I1114 14:34:31.498113   61837 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:34:31.498546   61837 main.go:141] libmachine: Using API Version  1
	I1114 14:34:31.498564   61837 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:34:31.498882   61837 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:34:31.499323   61837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 14:34:31.499361   61837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 14:34:31.514641   61837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43183
	I1114 14:34:31.515104   61837 main.go:141] libmachine: () Calling .GetVersion
	I1114 14:34:31.515633   61837 main.go:141] libmachine: Using API Version  1
	I1114 14:34:31.515654   61837 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 14:34:31.516046   61837 main.go:141] libmachine: () Calling .GetMachineName
	I1114 14:34:31.516260   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetState
	I1114 14:34:31.517831   61837 main.go:141] libmachine: (newest-cni-981589) Calling .DriverName
	I1114 14:34:31.518148   61837 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 14:34:31.518162   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 14:34:31.518179   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHHostname
	I1114 14:34:31.521209   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:31.521727   61837 main.go:141] libmachine: (newest-cni-981589) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:43:87", ip: ""} in network mk-newest-cni-981589: {Iface:virbr1 ExpiryTime:2023-11-14 15:34:00 +0000 UTC Type:0 Mac:52:54:00:52:43:87 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:newest-cni-981589 Clientid:01:52:54:00:52:43:87}
	I1114 14:34:31.521756   61837 main.go:141] libmachine: (newest-cni-981589) DBG | domain newest-cni-981589 has defined IP address 192.168.39.162 and MAC address 52:54:00:52:43:87 in network mk-newest-cni-981589
	I1114 14:34:31.521916   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHPort
	I1114 14:34:31.522117   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHKeyPath
	I1114 14:34:31.522257   61837 main.go:141] libmachine: (newest-cni-981589) Calling .GetSSHUsername
	I1114 14:34:31.522408   61837 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/newest-cni-981589/id_rsa Username:docker}
	I1114 14:34:31.810381   61837 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1114 14:34:31.810413   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1114 14:34:31.907791   61837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 14:34:31.911698   61837 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 14:34:31.911717   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1114 14:34:31.915088   61837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 14:34:31.977217   61837 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1114 14:34:31.977247   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1114 14:34:32.018464   61837 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 14:34:32.018488   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 14:34:32.047037   61837 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1114 14:34:32.047058   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1114 14:34:32.080432   61837 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 14:34:32.080454   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 14:34:32.103610   61837 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1114 14:34:32.103633   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1114 14:34:32.149462   61837 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1114 14:34:32.149482   61837 api_server.go:52] waiting for apiserver process to appear ...
	I1114 14:34:32.149594   61837 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1114 14:34:32.149650   61837 cache_images.go:84] Images are preloaded, skipping loading
	I1114 14:34:32.149665   61837 cache_images.go:262] succeeded pushing to: newest-cni-981589
	I1114 14:34:32.149671   61837 cache_images.go:263] failed pushing to: 
	I1114 14:34:32.149693   61837 main.go:141] libmachine: Making call to close driver server
	I1114 14:34:32.149614   61837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:34:32.149722   61837 main.go:141] libmachine: (newest-cni-981589) Calling .Close
	I1114 14:34:32.150060   61837 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:34:32.150103   61837 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:34:32.150117   61837 main.go:141] libmachine: Making call to close driver server
	I1114 14:34:32.150127   61837 main.go:141] libmachine: (newest-cni-981589) Calling .Close
	I1114 14:34:32.150073   61837 main.go:141] libmachine: (newest-cni-981589) DBG | Closing plugin on server side
	I1114 14:34:32.150387   61837 main.go:141] libmachine: (newest-cni-981589) DBG | Closing plugin on server side
	I1114 14:34:32.150428   61837 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:34:32.150441   61837 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:34:32.156971   61837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 14:34:32.275060   61837 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1114 14:34:32.275088   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1114 14:34:32.358345   61837 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1114 14:34:32.358380   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1114 14:34:32.402154   61837 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1114 14:34:32.402187   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1114 14:34:32.448264   61837 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1114 14:34:32.448287   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1114 14:34:32.477099   61837 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1114 14:34:32.477138   61837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1114 14:34:32.530620   61837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1114 14:34:33.232480   61837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.324654224s)
	I1114 14:34:33.232561   61837 main.go:141] libmachine: Making call to close driver server
	I1114 14:34:33.232580   61837 main.go:141] libmachine: (newest-cni-981589) Calling .Close
	I1114 14:34:33.232913   61837 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:34:33.232933   61837 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:34:33.232946   61837 main.go:141] libmachine: Making call to close driver server
	I1114 14:34:33.232957   61837 main.go:141] libmachine: (newest-cni-981589) Calling .Close
	I1114 14:34:33.233190   61837 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:34:33.233226   61837 main.go:141] libmachine: (newest-cni-981589) DBG | Closing plugin on server side
	I1114 14:34:33.233239   61837 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:34:33.239985   61837 main.go:141] libmachine: Making call to close driver server
	I1114 14:34:33.240007   61837 main.go:141] libmachine: (newest-cni-981589) Calling .Close
	I1114 14:34:33.240262   61837 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:34:33.240285   61837 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:34:33.571657   61837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.656533392s)
	I1114 14:34:33.571704   61837 main.go:141] libmachine: Making call to close driver server
	I1114 14:34:33.571718   61837 main.go:141] libmachine: (newest-cni-981589) Calling .Close
	I1114 14:34:33.571740   61837 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.422014967s)
	I1114 14:34:33.571770   61837 api_server.go:72] duration metric: took 2.134568274s to wait for apiserver process to appear ...
	I1114 14:34:33.571780   61837 api_server.go:88] waiting for apiserver healthz status ...
	I1114 14:34:33.571795   61837 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1114 14:34:33.572045   61837 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:34:33.572062   61837 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:34:33.572072   61837 main.go:141] libmachine: Making call to close driver server
	I1114 14:34:33.572082   61837 main.go:141] libmachine: (newest-cni-981589) Calling .Close
	I1114 14:34:33.572332   61837 main.go:141] libmachine: (newest-cni-981589) DBG | Closing plugin on server side
	I1114 14:34:33.572352   61837 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:34:33.572368   61837 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:34:33.580216   61837 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1114 14:34:33.581343   61837 api_server.go:141] control plane version: v1.28.3
	I1114 14:34:33.581360   61837 api_server.go:131] duration metric: took 9.574525ms to wait for apiserver health ...
	I1114 14:34:33.581367   61837 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 14:34:33.587865   61837 system_pods.go:59] 8 kube-system pods found
	I1114 14:34:33.587895   61837 system_pods.go:61] "coredns-5dd5756b68-54jq8" [610c8429-6191-4225-a3f5-c6892d2bf1f0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1114 14:34:33.587903   61837 system_pods.go:61] "etcd-newest-cni-981589" [3151806e-0f04-4844-afab-b204b24a0d8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1114 14:34:33.587916   61837 system_pods.go:61] "kube-apiserver-newest-cni-981589" [5af2a075-bdd5-4991-9cd1-57df73c9577a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1114 14:34:33.587927   61837 system_pods.go:61] "kube-controller-manager-newest-cni-981589" [58f4c8da-1371-43ac-9988-66fa438fb4a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1114 14:34:33.587935   61837 system_pods.go:61] "kube-proxy-lqqbx" [f3cb36d7-3e92-4262-b6fd-d4231fdf2e40] Running
	I1114 14:34:33.587947   61837 system_pods.go:61] "kube-scheduler-newest-cni-981589" [b9c9dc3d-5deb-40dd-b7bc-8dd6da03f446] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1114 14:34:33.587961   61837 system_pods.go:61] "metrics-server-57f55c9bc5-nfrlm" [938be749-76de-4a01-b720-3c61c9e5be7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:34:33.587970   61837 system_pods.go:61] "storage-provisioner" [d5d6ab5f-b14e-4b71-9781-87e00e6094f9] Running
	I1114 14:34:33.587981   61837 system_pods.go:74] duration metric: took 6.607402ms to wait for pod list to return data ...
	I1114 14:34:33.587992   61837 default_sa.go:34] waiting for default service account to be created ...
	I1114 14:34:33.590264   61837 default_sa.go:45] found service account: "default"
	I1114 14:34:33.590280   61837 default_sa.go:55] duration metric: took 2.282763ms for default service account to be created ...
	I1114 14:34:33.590288   61837 kubeadm.go:581] duration metric: took 2.153087953s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1114 14:34:33.590305   61837 node_conditions.go:102] verifying NodePressure condition ...
	I1114 14:34:33.592705   61837 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:34:33.592723   61837 node_conditions.go:123] node cpu capacity is 2
	I1114 14:34:33.592731   61837 node_conditions.go:105] duration metric: took 2.421321ms to run NodePressure ...
	I1114 14:34:33.592741   61837 start.go:228] waiting for startup goroutines ...
	I1114 14:34:33.629131   61837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.472121781s)
	I1114 14:34:33.629188   61837 main.go:141] libmachine: Making call to close driver server
	I1114 14:34:33.629202   61837 main.go:141] libmachine: (newest-cni-981589) Calling .Close
	I1114 14:34:33.629538   61837 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:34:33.629559   61837 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:34:33.629571   61837 main.go:141] libmachine: Making call to close driver server
	I1114 14:34:33.629580   61837 main.go:141] libmachine: (newest-cni-981589) Calling .Close
	I1114 14:34:33.629871   61837 main.go:141] libmachine: (newest-cni-981589) DBG | Closing plugin on server side
	I1114 14:34:33.629868   61837 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:34:33.629904   61837 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:34:33.629915   61837 addons.go:467] Verifying addon metrics-server=true in "newest-cni-981589"
	I1114 14:34:34.054962   61837 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.524296816s)
	I1114 14:34:34.055013   61837 main.go:141] libmachine: Making call to close driver server
	I1114 14:34:34.055030   61837 main.go:141] libmachine: (newest-cni-981589) Calling .Close
	I1114 14:34:34.055280   61837 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:34:34.055303   61837 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:34:34.055323   61837 main.go:141] libmachine: Making call to close driver server
	I1114 14:34:34.055335   61837 main.go:141] libmachine: (newest-cni-981589) Calling .Close
	I1114 14:34:34.055597   61837 main.go:141] libmachine: (newest-cni-981589) DBG | Closing plugin on server side
	I1114 14:34:34.055642   61837 main.go:141] libmachine: Successfully made call to close driver server
	I1114 14:34:34.055652   61837 main.go:141] libmachine: Making call to close connection to plugin binary
	I1114 14:34:34.057081   61837 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-981589 addons enable metrics-server	
	
	
	I1114 14:34:34.058360   61837 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1114 14:34:34.059676   61837 addons.go:502] enable addons completed in 2.637678056s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1114 14:34:34.059711   61837 start.go:233] waiting for cluster config update ...
	I1114 14:34:34.059724   61837 start.go:242] writing updated cluster config ...
	I1114 14:34:34.059927   61837 ssh_runner.go:195] Run: rm -f paused
	I1114 14:34:34.107023   61837 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 14:34:34.108582   61837 out.go:177] * Done! kubectl is now configured to use "newest-cni-981589" cluster and "default" namespace by default
	I1114 14:34:40.044787   58802 system_pods.go:86] 8 kube-system pods found
	I1114 14:34:40.044811   58802 system_pods.go:89] "coredns-5644d7b6d9-dss8s" [d70babb7-bb72-4369-b05d-09026c086dde] Running
	I1114 14:34:40.044816   58802 system_pods.go:89] "etcd-old-k8s-version-133714" [df72cbdd-a207-4ca5-9d9a-7ee34e5f4774] Running
	I1114 14:34:40.044820   58802 system_pods.go:89] "kube-apiserver-old-k8s-version-133714" [03c9ad10-c07c-41b8-8924-a69b88419baa] Running
	I1114 14:34:40.044825   58802 system_pods.go:89] "kube-controller-manager-old-k8s-version-133714" [55806e5e-0d60-494e-9880-7d59a20d20ed] Running
	I1114 14:34:40.044829   58802 system_pods.go:89] "kube-proxy-cdd4t" [2ec12192-4ffc-498a-a39a-9efe5a0ea335] Running
	I1114 14:34:40.044833   58802 system_pods.go:89] "kube-scheduler-old-k8s-version-133714" [87a9539e-5824-47ad-b171-f76f3241ecf6] Running
	I1114 14:34:40.044839   58802 system_pods.go:89] "metrics-server-74d5856cc6-gsjk7" [fb8615e8-85c2-466a-8c4c-d0da4fe15502] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 14:34:40.044843   58802 system_pods.go:89] "storage-provisioner" [6878a446-51c1-423d-9150-b28bfd7b21d2] Running
	I1114 14:34:40.044851   58802 system_pods.go:126] duration metric: took 1m7.374786171s to wait for k8s-apps to be running ...
	I1114 14:34:40.044857   58802 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 14:34:40.044897   58802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:34:40.059440   58802 system_svc.go:56] duration metric: took 14.575779ms WaitForService to wait for kubelet.
	I1114 14:34:40.059462   58802 kubeadm.go:581] duration metric: took 1m14.916612503s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 14:34:40.059479   58802 node_conditions.go:102] verifying NodePressure condition ...
	I1114 14:34:40.062606   58802 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1114 14:34:40.062631   58802 node_conditions.go:123] node cpu capacity is 2
	I1114 14:34:40.062639   58802 node_conditions.go:105] duration metric: took 3.157133ms to run NodePressure ...
	I1114 14:34:40.062650   58802 start.go:228] waiting for startup goroutines ...
	I1114 14:34:40.062656   58802 start.go:233] waiting for cluster config update ...
	I1114 14:34:40.062665   58802 start.go:242] writing updated cluster config ...
	I1114 14:34:40.062907   58802 ssh_runner.go:195] Run: rm -f paused
	I1114 14:34:40.110070   58802 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1114 14:34:40.111852   58802 out.go:177] 
	W1114 14:34:40.113118   58802 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1114 14:34:40.114299   58802 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1114 14:34:40.115785   58802 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-133714" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-11-14 14:27:25 UTC, ends at Tue 2023-11-14 14:34:51 UTC. --
	Nov 14 14:33:47 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:33:47.133103955Z" level=warning msg="cleaning up after shim disconnected" id=c253434a9520fc4318d26af454a5f438b047146e8782b5c6c6692772afd32a81 namespace=moby
	Nov 14 14:33:47 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:33:47.133175770Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 14 14:33:47 old-k8s-version-133714 dockerd[1212]: time="2023-11-14T14:33:47.134637618Z" level=info msg="ignoring event" container=c253434a9520fc4318d26af454a5f438b047146e8782b5c6c6692772afd32a81 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 14 14:34:01 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:34:01.246009590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 14 14:34:01 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:34:01.246118055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 14 14:34:01 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:34:01.246137983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 14 14:34:01 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:34:01.246147955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 14 14:34:01 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:34:01.644934693Z" level=info msg="shim disconnected" id=d396e66d0611bffbffd7ff56d933a49da9a65bf9b9a6858edda2097386c7b84b namespace=moby
	Nov 14 14:34:01 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:34:01.645697962Z" level=warning msg="cleaning up after shim disconnected" id=d396e66d0611bffbffd7ff56d933a49da9a65bf9b9a6858edda2097386c7b84b namespace=moby
	Nov 14 14:34:01 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:34:01.645882842Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 14 14:34:01 old-k8s-version-133714 dockerd[1212]: time="2023-11-14T14:34:01.646346148Z" level=info msg="ignoring event" container=d396e66d0611bffbffd7ff56d933a49da9a65bf9b9a6858edda2097386c7b84b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 14 14:34:07 old-k8s-version-133714 dockerd[1212]: time="2023-11-14T14:34:07.180387702Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 14 14:34:07 old-k8s-version-133714 dockerd[1212]: time="2023-11-14T14:34:07.180442626Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 14 14:34:07 old-k8s-version-133714 dockerd[1212]: time="2023-11-14T14:34:07.186466553Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 14 14:34:32 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:34:32.273330169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 14 14:34:32 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:34:32.273718940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 14 14:34:32 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:34:32.273853344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 14 14:34:32 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:34:32.273884420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 14 14:34:32 old-k8s-version-133714 dockerd[1212]: time="2023-11-14T14:34:32.735813327Z" level=info msg="ignoring event" container=62e3b22fa2a4706b4b043f823b893806a3fbcc30ace66d66201fca8524d9d092 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 14 14:34:32 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:34:32.736948914Z" level=info msg="shim disconnected" id=62e3b22fa2a4706b4b043f823b893806a3fbcc30ace66d66201fca8524d9d092 namespace=moby
	Nov 14 14:34:32 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:34:32.737001437Z" level=warning msg="cleaning up after shim disconnected" id=62e3b22fa2a4706b4b043f823b893806a3fbcc30ace66d66201fca8524d9d092 namespace=moby
	Nov 14 14:34:32 old-k8s-version-133714 dockerd[1218]: time="2023-11-14T14:34:32.737042386Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 14 14:34:50 old-k8s-version-133714 dockerd[1212]: time="2023-11-14T14:34:50.193739574Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 14 14:34:50 old-k8s-version-133714 dockerd[1212]: time="2023-11-14T14:34:50.194121638Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 14 14:34:50 old-k8s-version-133714 dockerd[1212]: time="2023-11-14T14:34:50.197343218Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                      PORTS     NAMES
	62e3b22fa2a4   a90209bb39e3             "nginx -g 'daemon of…"   19 seconds ago       Exited (1) 18 seconds ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard_01004912-de1b-4689-87b5-0c9449640d78_3
	e796cd01deb1   kubernetesui/dashboard   "/dashboard --insecu…"   About a minute ago   Up About a minute                     k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-hdv95_kubernetes-dashboard_e8d1ce0a-c66b-4d84-864a-dd71c997a2aa_0
	400974259a5f   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kubernetes-dashboard-84b68f675b-hdv95_kubernetes-dashboard_e8d1ce0a-c66b-4d84-864a-dd71c997a2aa_0
	cfbd63d57958   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard_01004912-de1b-4689-87b5-0c9449640d78_0
	914884bf13de   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_metrics-server-74d5856cc6-gsjk7_kube-system_fb8615e8-85c2-466a-8c4c-d0da4fe15502_0
	70e4eb46e063   6e38f40d628d             "/storage-provisioner"   About a minute ago   Up About a minute                     k8s_storage-provisioner_storage-provisioner_kube-system_6878a446-51c1-423d-9150-b28bfd7b21d2_0
	4c2dc7edf075   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_storage-provisioner_kube-system_6878a446-51c1-423d-9150-b28bfd7b21d2_0
	b8211de011f9   bf261d157914             "/coredns -conf /etc…"   About a minute ago   Up About a minute                     k8s_coredns_coredns-5644d7b6d9-dss8s_kube-system_d70babb7-bb72-4369-b05d-09026c086dde_0
	b87783a33b4e   c21b0c7400f9             "/usr/local/bin/kube…"   About a minute ago   Up About a minute                     k8s_kube-proxy_kube-proxy-cdd4t_kube-system_2ec12192-4ffc-498a-a39a-9efe5a0ea335_0
	7efd9ecc469a   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_coredns-5644d7b6d9-dss8s_kube-system_d70babb7-bb72-4369-b05d-09026c086dde_0
	f34914d8a429   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-proxy-cdd4t_kube-system_2ec12192-4ffc-498a-a39a-9efe5a0ea335_0
	10c0229116c2   b305571ca60a             "kube-apiserver --ad…"   About a minute ago   Up About a minute                     k8s_kube-apiserver_kube-apiserver-old-k8s-version-133714_kube-system_72531d07cede24a698c4d67c353e388a_0
	0523769a8c5e   b2756210eeab             "etcd --advertise-cl…"   About a minute ago   Up About a minute                     k8s_etcd_etcd-old-k8s-version-133714_kube-system_8237f72467073e0427bca478515bd406_0
	8824df4ee67e   301ddc62b80b             "kube-scheduler --au…"   About a minute ago   Up About a minute                     k8s_kube-scheduler_kube-scheduler-old-k8s-version-133714_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	989634db3c16   06a629a7e51c             "kube-controller-man…"   About a minute ago   Up About a minute                     k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-133714_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	87bdb1de90c4   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-apiserver-old-k8s-version-133714_kube-system_72531d07cede24a698c4d67c353e388a_0
	6c710ed97b8e   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_etcd-old-k8s-version-133714_kube-system_8237f72467073e0427bca478515bd406_0
	d84e8397196a   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-scheduler-old-k8s-version-133714_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	e5e8158b8959   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-controller-manager-old-k8s-version-133714_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	time="2023-11-14T14:34:51Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [b8211de011f9] <==
	* .:53
	2023-11-14T14:33:27.342Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-11-14T14:33:27.342Z [INFO] CoreDNS-1.6.2
	2023-11-14T14:33:27.342Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-11-14T14:33:57.165Z [INFO] plugin/reload: Running configuration MD5 = 7bc8613a521eb1a1737fc3e7c0fea3ca
	[INFO] Reloading complete
	2023-11-14T14:33:57.234Z [INFO] 127.0.0.1:35899 - 20880 "HINFO IN 6123491314892486609.2602225267000393585. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.068004849s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-133714
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-133714
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d8573efb5a7770e21024de23a29d810b200278b
	                    minikube.k8s.io/name=old-k8s-version-133714
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T14_33_10_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 14:33:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 14:34:06 +0000   Tue, 14 Nov 2023 14:33:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 14:34:06 +0000   Tue, 14 Nov 2023 14:33:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 14:34:06 +0000   Tue, 14 Nov 2023 14:33:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 14:34:06 +0000   Tue, 14 Nov 2023 14:33:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.57
	  Hostname:    old-k8s-version-133714
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 ec831066d7aa46dda4110e26597bad52
	 System UUID:                ec831066-d7aa-46dd-a411-0e26597bad52
	 Boot ID:                    6ff17c81-58f7-4676-b940-1c112ef5aa15
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.7
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-dss8s                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                etcd-old-k8s-version-133714                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                kube-apiserver-old-k8s-version-133714             250m (12%)    0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                kube-controller-manager-old-k8s-version-133714    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                kube-proxy-cdd4t                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                kube-scheduler-old-k8s-version-133714             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                metrics-server-74d5856cc6-gsjk7                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         82s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-bx8jc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-hdv95             0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  NodeHasSufficientMemory  113s (x8 over 113s)  kubelet, old-k8s-version-133714     Node old-k8s-version-133714 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x8 over 113s)  kubelet, old-k8s-version-133714     Node old-k8s-version-133714 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x7 over 113s)  kubelet, old-k8s-version-133714     Node old-k8s-version-133714 status is now: NodeHasSufficientPID
	  Normal  Starting                 85s                  kube-proxy, old-k8s-version-133714  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.082376] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.452029] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.588237] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.144099] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.733899] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.694160] systemd-fstab-generator[510]: Ignoring "noauto" for root device
	[  +0.111271] systemd-fstab-generator[528]: Ignoring "noauto" for root device
	[  +1.321623] systemd-fstab-generator[799]: Ignoring "noauto" for root device
	[  +0.378877] systemd-fstab-generator[837]: Ignoring "noauto" for root device
	[  +0.165199] systemd-fstab-generator[848]: Ignoring "noauto" for root device
	[  +0.174707] systemd-fstab-generator[861]: Ignoring "noauto" for root device
	[  +6.630234] systemd-fstab-generator[1183]: Ignoring "noauto" for root device
	[  +2.476811] kauditd_printk_skb: 67 callbacks suppressed
	[ +13.164024] systemd-fstab-generator[1660]: Ignoring "noauto" for root device
	[  +0.557573] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.212765] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov14 14:28] kauditd_printk_skb: 5 callbacks suppressed
	[Nov14 14:32] hrtimer: interrupt took 4298601 ns
	[ +37.049228] systemd-fstab-generator[7012]: Ignoring "noauto" for root device
	[Nov14 14:33] kauditd_printk_skb: 4 callbacks suppressed
	
	* 
	* ==> etcd [0523769a8c5e] <==
	* 2023-11-14 14:33:01.256847 I | etcdserver: starting member 9e74a4e67a7986c4 in cluster ea4bebb141add8b3
	2023-11-14 14:33:01.262279 I | raft: 9e74a4e67a7986c4 became follower at term 0
	2023-11-14 14:33:01.264267 I | raft: newRaft 9e74a4e67a7986c4 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-11-14 14:33:01.270256 I | raft: 9e74a4e67a7986c4 became follower at term 1
	2023-11-14 14:33:01.322134 W | auth: simple token is not cryptographically signed
	2023-11-14 14:33:01.328785 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-11-14 14:33:01.329285 I | etcdserver: 9e74a4e67a7986c4 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-14 14:33:01.330165 I | etcdserver/membership: added member 9e74a4e67a7986c4 [https://192.168.61.57:2380] to cluster ea4bebb141add8b3
	2023-11-14 14:33:01.336012 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-14 14:33:01.337135 I | embed: listening for metrics on http://192.168.61.57:2381
	2023-11-14 14:33:01.337818 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-14 14:33:02.275786 I | raft: 9e74a4e67a7986c4 is starting a new election at term 1
	2023-11-14 14:33:02.275911 I | raft: 9e74a4e67a7986c4 became candidate at term 2
	2023-11-14 14:33:02.276140 I | raft: 9e74a4e67a7986c4 received MsgVoteResp from 9e74a4e67a7986c4 at term 2
	2023-11-14 14:33:02.276307 I | raft: 9e74a4e67a7986c4 became leader at term 2
	2023-11-14 14:33:02.276330 I | raft: raft.node: 9e74a4e67a7986c4 elected leader 9e74a4e67a7986c4 at term 2
	2023-11-14 14:33:02.276811 I | etcdserver: setting up the initial cluster version to 3.3
	2023-11-14 14:33:02.278722 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-14 14:33:02.278807 I | etcdserver/api: enabled capabilities for version 3.3
	2023-11-14 14:33:02.278833 I | etcdserver: published {Name:old-k8s-version-133714 ClientURLs:[https://192.168.61.57:2379]} to cluster ea4bebb141add8b3
	2023-11-14 14:33:02.278837 I | embed: ready to serve client requests
	2023-11-14 14:33:02.280140 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-14 14:33:02.280325 I | embed: ready to serve client requests
	2023-11-14 14:33:02.281492 I | embed: serving client requests on 192.168.61.57:2379
	2023-11-14 14:33:37.690294 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:4 size:7877" took too long (283.035168ms) to execute
	
	* 
	* ==> kernel <==
	*  14:34:51 up 7 min,  0 users,  load average: 0.67, 0.60, 0.29
	Linux old-k8s-version-133714 5.10.57 #1 SMP Sat Nov 11 01:15:44 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [10c0229116c2] <==
	* I1114 14:33:06.496158       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I1114 14:33:06.508350       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1114 14:33:06.508469       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1114 14:33:08.269489       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1114 14:33:08.548784       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1114 14:33:08.828447       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.61.57]
	I1114 14:33:08.829338       1 controller.go:606] quota admission added evaluator for: endpoints
	I1114 14:33:08.916379       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1114 14:33:09.754025       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1114 14:33:10.039551       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1114 14:33:10.347763       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1114 14:33:25.224570       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1114 14:33:25.272892       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1114 14:33:25.411931       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	E1114 14:33:28.893516       1 available_controller.go:416] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I1114 14:33:30.138891       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1114 14:33:30.139149       1 handler_proxy.go:99] no RequestInfo found in the context
	E1114 14:33:30.139417       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 14:33:30.139632       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1114 14:34:30.140554       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1114 14:34:30.140659       1 handler_proxy.go:99] no RequestInfo found in the context
	E1114 14:34:30.140755       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1114 14:34:30.140769       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [989634db3c16] <==
	* E1114 14:33:28.483091       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1114 14:33:28.483554       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"4c1164fe-c3c0-4290-9941-f15ed428ea9a", APIVersion:"apps/v1", ResourceVersion:"400", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1114 14:33:28.483583       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"95f23b71-17e1-4fc9-a50f-442ed5256cfe", APIVersion:"apps/v1", ResourceVersion:"398", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1114 14:33:28.535762       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1114 14:33:28.541767       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1114 14:33:28.542029       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"95f23b71-17e1-4fc9-a50f-442ed5256cfe", APIVersion:"apps/v1", ResourceVersion:"398", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1114 14:33:28.606411       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1114 14:33:28.606918       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"4c1164fe-c3c0-4290-9941-f15ed428ea9a", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1114 14:33:28.607795       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1114 14:33:28.608890       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"95f23b71-17e1-4fc9-a50f-442ed5256cfe", APIVersion:"apps/v1", ResourceVersion:"398", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1114 14:33:28.678088       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1114 14:33:28.678989       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1114 14:33:28.679304       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"4c1164fe-c3c0-4290-9941-f15ed428ea9a", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1114 14:33:28.679680       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"95f23b71-17e1-4fc9-a50f-442ed5256cfe", APIVersion:"apps/v1", ResourceVersion:"398", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1114 14:33:28.814489       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1114 14:33:28.815445       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"95f23b71-17e1-4fc9-a50f-442ed5256cfe", APIVersion:"apps/v1", ResourceVersion:"398", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1114 14:33:28.815688       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1114 14:33:28.815738       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"4c1164fe-c3c0-4290-9941-f15ed428ea9a", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1114 14:33:29.081171       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"5d4d7352-ddad-4b50-a1fd-01115dc98027", APIVersion:"apps/v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-74d5856cc6-gsjk7
	I1114 14:33:29.889963       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"95f23b71-17e1-4fc9-a50f-442ed5256cfe", APIVersion:"apps/v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-bx8jc
	I1114 14:33:29.921517       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"4c1164fe-c3c0-4290-9941-f15ed428ea9a", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-hdv95
	E1114 14:33:55.624704       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 14:33:57.474812       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1114 14:34:25.877428       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1114 14:34:29.477040       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [b87783a33b4e] <==
	* W1114 14:33:26.854547       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1114 14:33:26.878502       1 node.go:135] Successfully retrieved node IP: 192.168.61.57
	I1114 14:33:26.878545       1 server_others.go:149] Using iptables Proxier.
	I1114 14:33:26.879505       1 server.go:529] Version: v1.16.0
	I1114 14:33:26.902984       1 config.go:313] Starting service config controller
	I1114 14:33:26.903015       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1114 14:33:26.903169       1 config.go:131] Starting endpoints config controller
	I1114 14:33:26.903184       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1114 14:33:27.003845       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1114 14:33:27.003903       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [8824df4ee67e] <==
	* W1114 14:33:05.571509       1 authentication.go:79] Authentication is disabled
	I1114 14:33:05.571661       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1114 14:33:05.573895       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1114 14:33:05.638912       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 14:33:05.639541       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 14:33:05.639936       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 14:33:05.640545       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 14:33:05.645808       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 14:33:05.646083       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 14:33:05.646140       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 14:33:05.646263       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 14:33:05.649179       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 14:33:05.649619       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 14:33:05.650161       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 14:33:06.641953       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 14:33:06.643184       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 14:33:06.652083       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 14:33:06.655650       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 14:33:06.655739       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 14:33:06.658924       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 14:33:06.660105       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 14:33:06.660577       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 14:33:06.670153       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 14:33:06.678152       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 14:33:06.679908       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-14 14:27:25 UTC, ends at Tue 2023-11-14 14:34:51 UTC. --
	Nov 14 14:33:47 old-k8s-version-133714 kubelet[7030]: E1114 14:33:47.591182    7030 pod_workers.go:191] Error syncing pod 01004912-de1b-4689-87b5-0c9449640d78 ("dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"
	Nov 14 14:33:48 old-k8s-version-133714 kubelet[7030]: W1114 14:33:48.589699    7030 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bx8jc through plugin: invalid network status for
	Nov 14 14:33:48 old-k8s-version-133714 kubelet[7030]: E1114 14:33:48.604141    7030 pod_workers.go:191] Error syncing pod 01004912-de1b-4689-87b5-0c9449640d78 ("dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"
	Nov 14 14:33:49 old-k8s-version-133714 kubelet[7030]: E1114 14:33:49.613823    7030 pod_workers.go:191] Error syncing pod 01004912-de1b-4689-87b5-0c9449640d78 ("dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"
	Nov 14 14:33:56 old-k8s-version-133714 kubelet[7030]: E1114 14:33:56.144008    7030 pod_workers.go:191] Error syncing pod fb8615e8-85c2-466a-8c4c-d0da4fe15502 ("metrics-server-74d5856cc6-gsjk7_kube-system(fb8615e8-85c2-466a-8c4c-d0da4fe15502)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 14:34:01 old-k8s-version-133714 kubelet[7030]: W1114 14:34:01.703116    7030 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bx8jc through plugin: invalid network status for
	Nov 14 14:34:01 old-k8s-version-133714 kubelet[7030]: E1114 14:34:01.709736    7030 pod_workers.go:191] Error syncing pod 01004912-de1b-4689-87b5-0c9449640d78 ("dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"
	Nov 14 14:34:02 old-k8s-version-133714 kubelet[7030]: W1114 14:34:02.720764    7030 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bx8jc through plugin: invalid network status for
	Nov 14 14:34:07 old-k8s-version-133714 kubelet[7030]: E1114 14:34:07.186976    7030 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 14 14:34:07 old-k8s-version-133714 kubelet[7030]: E1114 14:34:07.187100    7030 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 14 14:34:07 old-k8s-version-133714 kubelet[7030]: E1114 14:34:07.187153    7030 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 14 14:34:07 old-k8s-version-133714 kubelet[7030]: E1114 14:34:07.187183    7030 pod_workers.go:191] Error syncing pod fb8615e8-85c2-466a-8c4c-d0da4fe15502 ("metrics-server-74d5856cc6-gsjk7_kube-system(fb8615e8-85c2-466a-8c4c-d0da4fe15502)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 14 14:34:08 old-k8s-version-133714 kubelet[7030]: E1114 14:34:08.026759    7030 pod_workers.go:191] Error syncing pod 01004912-de1b-4689-87b5-0c9449640d78 ("dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"
	Nov 14 14:34:21 old-k8s-version-133714 kubelet[7030]: E1114 14:34:21.135806    7030 pod_workers.go:191] Error syncing pod 01004912-de1b-4689-87b5-0c9449640d78 ("dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"
	Nov 14 14:34:22 old-k8s-version-133714 kubelet[7030]: E1114 14:34:22.140099    7030 pod_workers.go:191] Error syncing pod fb8615e8-85c2-466a-8c4c-d0da4fe15502 ("metrics-server-74d5856cc6-gsjk7_kube-system(fb8615e8-85c2-466a-8c4c-d0da4fe15502)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 14:34:32 old-k8s-version-133714 kubelet[7030]: W1114 14:34:32.956628    7030 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bx8jc through plugin: invalid network status for
	Nov 14 14:34:32 old-k8s-version-133714 kubelet[7030]: E1114 14:34:32.964104    7030 pod_workers.go:191] Error syncing pod 01004912-de1b-4689-87b5-0c9449640d78 ("dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"
	Nov 14 14:34:33 old-k8s-version-133714 kubelet[7030]: W1114 14:34:33.972919    7030 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bx8jc through plugin: invalid network status for
	Nov 14 14:34:36 old-k8s-version-133714 kubelet[7030]: E1114 14:34:36.149071    7030 pod_workers.go:191] Error syncing pod fb8615e8-85c2-466a-8c4c-d0da4fe15502 ("metrics-server-74d5856cc6-gsjk7_kube-system(fb8615e8-85c2-466a-8c4c-d0da4fe15502)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 14 14:34:38 old-k8s-version-133714 kubelet[7030]: E1114 14:34:38.026791    7030 pod_workers.go:191] Error syncing pod 01004912-de1b-4689-87b5-0c9449640d78 ("dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"
	Nov 14 14:34:50 old-k8s-version-133714 kubelet[7030]: E1114 14:34:50.197984    7030 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 14 14:34:50 old-k8s-version-133714 kubelet[7030]: E1114 14:34:50.198053    7030 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 14 14:34:50 old-k8s-version-133714 kubelet[7030]: E1114 14:34:50.198099    7030 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 14 14:34:50 old-k8s-version-133714 kubelet[7030]: E1114 14:34:50.198135    7030 pod_workers.go:191] Error syncing pod fb8615e8-85c2-466a-8c4c-d0da4fe15502 ("metrics-server-74d5856cc6-gsjk7_kube-system(fb8615e8-85c2-466a-8c4c-d0da4fe15502)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 14 14:34:51 old-k8s-version-133714 kubelet[7030]: E1114 14:34:51.136003    7030 pod_workers.go:191] Error syncing pod 01004912-de1b-4689-87b5-0c9449640d78 ("dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bx8jc_kubernetes-dashboard(01004912-de1b-4689-87b5-0c9449640d78)"
	
	* 
	* ==> kubernetes-dashboard [e796cd01deb1] <==
	* 2023/11/14 14:33:40 Using namespace: kubernetes-dashboard
	2023/11/14 14:33:40 Using in-cluster config to connect to apiserver
	2023/11/14 14:33:40 Using secret token for csrf signing
	2023/11/14 14:33:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/11/14 14:33:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/11/14 14:33:40 Successful initial request to the apiserver, version: v1.16.0
	2023/11/14 14:33:40 Generating JWE encryption key
	2023/11/14 14:33:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/11/14 14:33:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/11/14 14:33:40 Initializing JWE encryption key from synchronized object
	2023/11/14 14:33:40 Creating in-cluster Sidecar client
	2023/11/14 14:33:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/11/14 14:33:40 Serving insecurely on HTTP port: 9090
	2023/11/14 14:34:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/11/14 14:34:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/11/14 14:33:40 Starting overwatch
	
	* 
	* ==> storage-provisioner [70e4eb46e063] <==
	* I1114 14:33:28.972981       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 14:33:29.128409       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 14:33:29.132743       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 14:33:29.227298       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 14:33:29.229661       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-133714_95b6f28e-9ac6-46af-ae86-32f14be2c5a2!
	I1114 14:33:29.258510       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"90b84012-732b-4dbd-ab1a-5514e4873421", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-133714_95b6f28e-9ac6-46af-ae86-32f14be2c5a2 became leader
	I1114 14:33:29.335996       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-133714_95b6f28e-9ac6-46af-ae86-32f14be2c5a2!
	

-- /stdout --
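
Note: two patterns in the captured logs above are noise rather than the failure itself. The kube-scheduler "forbidden" errors are typical of a v1.16 control plane coming back up: the scheduler's informers begin listing resources before the API server has finished serving the RBAC bindings that authorize them, and the errors stop once bootstrap completes. The metrics-server ErrImagePull/ImagePullBackOff entries come from the test deliberately pointing the pod at the unresolvable registry fake.domain. A minimal spot-check sketch, assuming an admin kubeconfig for this context (standard kubectl; the context name is taken from the log):

    kubectl --context old-k8s-version-133714 auth can-i list nodes --as=system:kube-scheduler
    kubectl --context old-k8s-version-133714 auth can-i list pods --as=system:kube-scheduler
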
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-133714 -n old-k8s-version-133714
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-133714 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-gsjk7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-133714 describe pod metrics-server-74d5856cc6-gsjk7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-133714 describe pod metrics-server-74d5856cc6-gsjk7: exit status 1 (56.906258ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-gsjk7" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-133714 describe pod metrics-server-74d5856cc6-gsjk7: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.92s)
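
Note: the NotFound on the describe step is a race, not a second failure: metrics-server is managed by a Deployment, so the replica returned by the field-selector query was replaced between the get and the describe. A sketch that tolerates pods disappearing mid-post-mortem (plain kubectl and shell; context name taken from the log):

    kubectl --context old-k8s-version-133714 get po -A --field-selector=status.phase!=Running \
      -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name --no-headers |
    while read -r ns name; do
      kubectl --context old-k8s-version-133714 describe po -n "$ns" "$name" || true
    done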

Test pass (285/321)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.29
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.3/json-events 4.44
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.14
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
19 TestBinaryMirror 0.57
20 TestOffline 112.1
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
25 TestAddons/Setup 220.47
27 TestAddons/parallel/Registry 17.1
28 TestAddons/parallel/Ingress 24.57
29 TestAddons/parallel/InspektorGadget 11.08
30 TestAddons/parallel/MetricsServer 6.27
31 TestAddons/parallel/HelmTiller 15.4
33 TestAddons/parallel/CSI 67.73
34 TestAddons/parallel/Headlamp 16.17
35 TestAddons/parallel/CloudSpanner 5.67
36 TestAddons/parallel/LocalPath 64.42
37 TestAddons/parallel/NvidiaDevicePlugin 5.64
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/StoppedEnableDisable 13.4
42 TestCertOptions 87.63
43 TestCertExpiration 293.26
44 TestDockerFlags 80.85
45 TestForceSystemdFlag 90.07
46 TestForceSystemdEnv 63.15
48 TestKVMDriverInstallOrUpdate 3.71
52 TestErrorSpam/setup 48.48
53 TestErrorSpam/start 0.37
54 TestErrorSpam/status 0.78
55 TestErrorSpam/pause 1.25
56 TestErrorSpam/unpause 1.4
57 TestErrorSpam/stop 12.6
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 102.57
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 37.12
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.08
68 TestFunctional/serial/CacheCmd/cache/add_remote 2.49
69 TestFunctional/serial/CacheCmd/cache/add_local 1.3
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.28
74 TestFunctional/serial/CacheCmd/cache/delete 0.12
75 TestFunctional/serial/MinikubeKubectlCmd 0.12
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
77 TestFunctional/serial/ExtraConfig 42.92
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.13
80 TestFunctional/serial/LogsFileCmd 1.15
81 TestFunctional/serial/InvalidService 4.36
83 TestFunctional/parallel/ConfigCmd 0.46
84 TestFunctional/parallel/DashboardCmd 36.98
85 TestFunctional/parallel/DryRun 0.34
86 TestFunctional/parallel/InternationalLanguage 0.17
87 TestFunctional/parallel/StatusCmd 1.15
91 TestFunctional/parallel/ServiceCmdConnect 9.73
92 TestFunctional/parallel/AddonsCmd 0.17
93 TestFunctional/parallel/PersistentVolumeClaim 55.05
95 TestFunctional/parallel/SSHCmd 0.58
96 TestFunctional/parallel/CpCmd 1.1
97 TestFunctional/parallel/MySQL 36.14
98 TestFunctional/parallel/FileSync 0.21
99 TestFunctional/parallel/CertSync 1.45
103 TestFunctional/parallel/NodeLabels 0.06
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.25
107 TestFunctional/parallel/License 0.18
108 TestFunctional/parallel/ServiceCmd/DeployApp 12.28
109 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
110 TestFunctional/parallel/ProfileCmd/profile_list 0.36
111 TestFunctional/parallel/MountCmd/any-port 9.99
112 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
122 TestFunctional/parallel/Version/short 0.17
123 TestFunctional/parallel/Version/components 0.83
124 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
127 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
128 TestFunctional/parallel/ImageCommands/ImageBuild 3.81
129 TestFunctional/parallel/ImageCommands/Setup 1.46
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.26
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.4
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.25
133 TestFunctional/parallel/MountCmd/specific-port 1.89
134 TestFunctional/parallel/ServiceCmd/List 0.3
135 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
137 TestFunctional/parallel/MountCmd/VerifyCleanup 1.76
138 TestFunctional/parallel/ServiceCmd/Format 0.4
139 TestFunctional/parallel/ServiceCmd/URL 0.34
140 TestFunctional/parallel/DockerEnv/bash 1.34
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.22
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.93
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.89
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.04
148 TestFunctional/delete_addon-resizer_images 0.08
149 TestFunctional/delete_my-image_image 0.02
150 TestFunctional/delete_minikube_cached_images 0.02
151 TestGvisorAddon 342.62
154 TestImageBuild/serial/Setup 52.06
155 TestImageBuild/serial/NormalBuild 1.61
156 TestImageBuild/serial/BuildWithBuildArg 1.3
157 TestImageBuild/serial/BuildWithDockerIgnore 0.4
158 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.28
161 TestIngressAddonLegacy/StartLegacyK8sCluster 72.81
163 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 18.48
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.62
165 TestIngressAddonLegacy/serial/ValidateIngressAddons 41.64
168 TestJSONOutput/start/Command 67.98
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.74
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.59
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 8.1
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.21
196 TestMainNoArgs 0.06
197 TestMinikubeProfile 104.1
200 TestMountStart/serial/StartWithMountFirst 29.22
201 TestMountStart/serial/VerifyMountFirst 0.41
202 TestMountStart/serial/StartWithMountSecond 28.9
203 TestMountStart/serial/VerifyMountSecond 0.42
204 TestMountStart/serial/DeleteFirst 0.67
205 TestMountStart/serial/VerifyMountPostDelete 0.42
206 TestMountStart/serial/Stop 2.25
207 TestMountStart/serial/RestartStopped 24.83
208 TestMountStart/serial/VerifyMountPostStop 0.4
211 TestMultiNode/serial/FreshStart2Nodes 138.92
212 TestMultiNode/serial/DeployApp2Nodes 5.27
213 TestMultiNode/serial/PingHostFrom2Pods 0.95
214 TestMultiNode/serial/AddNode 51.2
215 TestMultiNode/serial/ProfileList 0.24
216 TestMultiNode/serial/CopyFile 8.13
217 TestMultiNode/serial/StopNode 4.05
218 TestMultiNode/serial/StartAfterStop 32.38
223 TestMultiNode/serial/ValidateNameConflict 53.85
228 TestPreload 181.94
230 TestScheduledStopUnix 123.68
231 TestSkaffold 143.04
234 TestRunningBinaryUpgrade 203.13
236 TestKubernetesUpgrade 227.95
249 TestStoppedBinaryUpgrade/Setup 0.52
250 TestStoppedBinaryUpgrade/Upgrade 260.52
252 TestPause/serial/Start 75.24
260 TestPause/serial/SecondStartNoReconfiguration 65.61
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
263 TestNoKubernetes/serial/StartWithK8s 69.32
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.64
265 TestNetworkPlugins/group/auto/Start 112.3
266 TestPause/serial/Pause 0.9
267 TestPause/serial/VerifyStatus 0.31
268 TestPause/serial/Unpause 0.67
269 TestPause/serial/PauseAgain 0.88
270 TestPause/serial/DeletePaused 1.13
271 TestPause/serial/VerifyDeletedResources 13.95
272 TestNetworkPlugins/group/kindnet/Start 89.7
273 TestNoKubernetes/serial/StartWithStopK8s 39.05
274 TestNetworkPlugins/group/calico/Start 108.71
275 TestNoKubernetes/serial/Start 51.66
276 TestNetworkPlugins/group/auto/KubeletFlags 0.25
277 TestNetworkPlugins/group/auto/NetCatPod 13.65
278 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
279 TestNetworkPlugins/group/auto/DNS 0.2
280 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
281 TestNetworkPlugins/group/auto/Localhost 0.19
282 TestNetworkPlugins/group/kindnet/NetCatPod 12.37
283 TestNetworkPlugins/group/auto/HairPin 0.18
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
285 TestNoKubernetes/serial/ProfileList 1.61
286 TestNoKubernetes/serial/Stop 2.27
287 TestNoKubernetes/serial/StartNoArgs 28.41
288 TestNetworkPlugins/group/kindnet/DNS 0.24
289 TestNetworkPlugins/group/kindnet/Localhost 0.22
290 TestNetworkPlugins/group/kindnet/HairPin 0.18
291 TestNetworkPlugins/group/custom-flannel/Start 91.77
292 TestNetworkPlugins/group/false/Start 100.49
293 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
294 TestNetworkPlugins/group/enable-default-cni/Start 124.77
295 TestNetworkPlugins/group/calico/ControllerPod 5.02
296 TestNetworkPlugins/group/calico/KubeletFlags 0.21
297 TestNetworkPlugins/group/calico/NetCatPod 10.35
298 TestNetworkPlugins/group/calico/DNS 0.27
299 TestNetworkPlugins/group/calico/Localhost 0.24
300 TestNetworkPlugins/group/calico/HairPin 0.26
301 TestNetworkPlugins/group/flannel/Start 116.02
302 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
303 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.37
304 TestNetworkPlugins/group/custom-flannel/DNS 0.2
305 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
306 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
307 TestNetworkPlugins/group/false/KubeletFlags 0.25
308 TestNetworkPlugins/group/false/NetCatPod 13.43
309 TestNetworkPlugins/group/bridge/Start 91.16
310 TestNetworkPlugins/group/false/DNS 0.25
311 TestNetworkPlugins/group/false/Localhost 0.2
312 TestNetworkPlugins/group/false/HairPin 0.18
313 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
314 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.51
315 TestNetworkPlugins/group/kubenet/Start 127.79
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
320 TestStartStop/group/old-k8s-version/serial/FirstStart 167.09
321 TestNetworkPlugins/group/flannel/ControllerPod 5.03
322 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
323 TestNetworkPlugins/group/flannel/NetCatPod 11.36
324 TestNetworkPlugins/group/flannel/DNS 0.24
325 TestNetworkPlugins/group/flannel/Localhost 0.24
326 TestNetworkPlugins/group/flannel/HairPin 0.24
328 TestStartStop/group/no-preload/serial/FirstStart 98.16
329 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
330 TestNetworkPlugins/group/bridge/NetCatPod 12.34
331 TestNetworkPlugins/group/bridge/DNS 0.18
332 TestNetworkPlugins/group/bridge/Localhost 0.17
333 TestNetworkPlugins/group/bridge/HairPin 0.18
335 TestStartStop/group/embed-certs/serial/FirstStart 83.04
336 TestNetworkPlugins/group/kubenet/KubeletFlags 0.24
337 TestNetworkPlugins/group/kubenet/NetCatPod 12.35
338 TestNetworkPlugins/group/kubenet/DNS 0.25
339 TestNetworkPlugins/group/kubenet/Localhost 0.19
340 TestNetworkPlugins/group/kubenet/HairPin 0.18
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.3
343 TestStartStop/group/no-preload/serial/DeployApp 9.43
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.2
345 TestStartStop/group/no-preload/serial/Stop 13.14
346 TestStartStop/group/embed-certs/serial/DeployApp 10.46
347 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
348 TestStartStop/group/no-preload/serial/SecondStart 336.48
349 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.29
350 TestStartStop/group/old-k8s-version/serial/DeployApp 8.5
351 TestStartStop/group/embed-certs/serial/Stop 13.15
352 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.05
353 TestStartStop/group/old-k8s-version/serial/Stop 13.17
354 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
355 TestStartStop/group/embed-certs/serial/SecondStart 320.01
356 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
357 TestStartStop/group/old-k8s-version/serial/SecondStart 465
358 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.54
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.37
360 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.15
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
362 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 350.31
363 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 16.02
364 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
365 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
366 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
367 TestStartStop/group/embed-certs/serial/Pause 2.9
368 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
370 TestStartStop/group/newest-cni/serial/FirstStart 77
371 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.34
372 TestStartStop/group/no-preload/serial/Pause 2.79
373 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 22.02
374 TestStartStop/group/newest-cni/serial/DeployApp 0
375 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.1
376 TestStartStop/group/newest-cni/serial/Stop 8.14
377 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
378 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
379 TestStartStop/group/newest-cni/serial/SecondStart 46.76
380 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
381 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.68
382 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
385 TestStartStop/group/newest-cni/serial/Pause 2.37
386 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
387 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
389 TestStartStop/group/old-k8s-version/serial/Pause 2.37
TestDownloadOnly/v1.16.0/json-events (10.29s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-783395 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-783395 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (10.286691631s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.29s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
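
Note: preload-exists only asserts that the preceding json-events run left the preload tarball in the local cache; nothing is started. A manual equivalent of the assertion, using the cache path that appears in the Last Start log below:

    ls -lh /home/jenkins/minikube-integration/17581-6041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4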

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-783395
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-783395: exit status 85 (70.822204ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-783395 | jenkins | v1.32.0 | 14 Nov 23 13:33 UTC |          |
	|         | -p download-only-783395        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 13:33:54
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 13:33:54.508093   13250 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:33:54.508248   13250 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:33:54.508257   13250 out.go:309] Setting ErrFile to fd 2...
	I1114 13:33:54.508261   13250 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:33:54.508427   13250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
	W1114 13:33:54.508540   13250 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17581-6041/.minikube/config/config.json: open /home/jenkins/minikube-integration/17581-6041/.minikube/config/config.json: no such file or directory
	I1114 13:33:54.509110   13250 out.go:303] Setting JSON to true
	I1114 13:33:54.509954   13250 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":985,"bootTime":1699967850,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 13:33:54.510019   13250 start.go:138] virtualization: kvm guest
	I1114 13:33:54.512429   13250 out.go:97] [download-only-783395] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 13:33:54.513712   13250 out.go:169] MINIKUBE_LOCATION=17581
	W1114 13:33:54.512546   13250 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17581-6041/.minikube/cache/preloaded-tarball: no such file or directory
	I1114 13:33:54.512614   13250 notify.go:220] Checking for updates...
	I1114 13:33:54.516276   13250 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:33:54.517595   13250 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 13:33:54.518862   13250 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-6041/.minikube
	I1114 13:33:54.520120   13250 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1114 13:33:54.522327   13250 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1114 13:33:54.522642   13250 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:33:54.620396   13250 out.go:97] Using the kvm2 driver based on user configuration
	I1114 13:33:54.620425   13250 start.go:298] selected driver: kvm2
	I1114 13:33:54.620431   13250 start.go:902] validating driver "kvm2" against <nil>
	I1114 13:33:54.620730   13250 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 13:33:54.620856   13250 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17581-6041/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1114 13:33:54.635521   13250 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1114 13:33:54.635571   13250 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 13:33:54.636029   13250 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1114 13:33:54.636197   13250 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1114 13:33:54.636261   13250 cni.go:84] Creating CNI manager for ""
	I1114 13:33:54.636281   13250 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1114 13:33:54.636293   13250 start_flags.go:323] config:
	{Name:download-only-783395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-783395 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:33:54.636504   13250 iso.go:125] acquiring lock: {Name:mk133084c23ed177adc820fc7d96b1f642fbaa07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 13:33:54.638229   13250 out.go:97] Downloading VM boot image ...
	I1114 13:33:54.638259   13250 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17581-6041/.minikube/cache/iso/amd64/minikube-v1.32.1-1699648094-17581-amd64.iso
	I1114 13:33:58.118642   13250 out.go:97] Starting control plane node download-only-783395 in cluster download-only-783395
	I1114 13:33:58.118701   13250 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1114 13:33:58.146636   13250 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1114 13:33:58.146685   13250 cache.go:56] Caching tarball of preloaded images
	I1114 13:33:58.146857   13250 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1114 13:33:58.148475   13250 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1114 13:33:58.148497   13250 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1114 13:33:58.178077   13250 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17581-6041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1114 13:34:01.117093   13250 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1114 13:34:01.117201   13250 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17581-6041/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1114 13:34:01.881953   13250 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1114 13:34:01.882313   13250 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/download-only-783395/config.json ...
	I1114 13:34:01.882343   13250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/download-only-783395/config.json: {Name:mk19fead5d349b671149d8dadc8d43faf68750f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:34:01.882489   13250 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1114 13:34:01.882653   13250 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17581-6041/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-783395"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
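
Note: exit status 85 is the expected outcome here. --download-only populates caches without ever creating a node, so "minikube logs" has no control plane to read (hence: The control plane node "" does not exist) and the test asserts that the command fails. The Last Start log also shows how downloads are integrity-checked: each URL carries a ?checksum= query naming a published sha256 (or md5) digest that the fetcher verifies after download. An out-of-band verification sketch for the same ISO (URLs copied from the log; the sha256sum output must match the published digest):

    curl -sLO https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso
    curl -sL https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso.sha256
    sha256sum minikube-v1.32.1-1699648094-17581-amd64.iso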

TestDownloadOnly/v1.28.3/json-events (4.44s)

=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-783395 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-783395 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=kvm2 : (4.43604895s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (4.44s)

TestDownloadOnly/v1.28.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

TestDownloadOnly/v1.28.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-783395
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-783395: exit status 85 (70.869018ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-783395 | jenkins | v1.32.0 | 14 Nov 23 13:33 UTC |          |
	|         | -p download-only-783395        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-783395 | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC |          |
	|         | -p download-only-783395        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 13:34:04
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 13:34:04.873185   13307 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:34:04.873426   13307 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:34:04.873448   13307 out.go:309] Setting ErrFile to fd 2...
	I1114 13:34:04.873453   13307 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:34:04.873640   13307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
	W1114 13:34:04.873737   13307 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17581-6041/.minikube/config/config.json: open /home/jenkins/minikube-integration/17581-6041/.minikube/config/config.json: no such file or directory
	I1114 13:34:04.874127   13307 out.go:303] Setting JSON to true
	I1114 13:34:04.874904   13307 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":995,"bootTime":1699967850,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 13:34:04.874965   13307 start.go:138] virtualization: kvm guest
	I1114 13:34:04.876958   13307 out.go:97] [download-only-783395] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 13:34:04.878380   13307 out.go:169] MINIKUBE_LOCATION=17581
	I1114 13:34:04.877102   13307 notify.go:220] Checking for updates...
	I1114 13:34:04.880696   13307 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:34:04.881960   13307 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 13:34:04.883248   13307 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-6041/.minikube
	I1114 13:34:04.884408   13307 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-783395"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-783395
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-552460 --alsologtostderr --binary-mirror http://127.0.0.1:44135 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-552460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-552460
--- PASS: TestBinaryMirror (0.57s)
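
Note: --binary-mirror redirects the kubectl/kubeadm/kubelet downloads that minikube would otherwise fetch from dl.k8s.io to a caller-supplied HTTP endpoint; the test serves one on 127.0.0.1:44135. A hypothetical local-mirror sketch, assuming the mirror must mimic the dl.k8s.io release path layout (directory layout and port are illustrative, not taken from the test):

    # populate mirror/release/v1.28.3/bin/linux/amd64/{kubectl,kubeadm,kubelet} first
    (cd mirror && python3 -m http.server 44135 --bind 127.0.0.1) &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-552460 \
      --binary-mirror http://127.0.0.1:44135 --driver=kvm2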

TestOffline (112.1s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-135189 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-135189 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m50.671118415s)
helpers_test.go:175: Cleaning up "offline-docker-135189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-135189
E1114 14:12:28.728462   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-135189: (1.426873611s)
--- PASS: TestOffline (112.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-017503
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-017503: exit status 85 (62.328895ms)

-- stdout --
	* Profile "addons-017503" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-017503"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-017503
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-017503: exit status 85 (62.599896ms)

-- stdout --
	* Profile "addons-017503" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-017503"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (220.47s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-017503 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-017503 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m40.468262641s)
--- PASS: TestAddons/Setup (220.47s)
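
Note: Setup enables the whole addon list up front via repeated --addons flags; the same addons can be toggled after start with the addons subcommand, which the parallel tests below rely on (profile and addon names taken from the flags above):

    out/minikube-linux-amd64 -p addons-017503 addons list
    out/minikube-linux-amd64 -p addons-017503 addons disable helm-tiller
    out/minikube-linux-amd64 -p addons-017503 addons enable helm-tiller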

TestAddons/parallel/Registry (17.1s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 15.691588ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-qjtmk" [5d2cabbf-5f85-4104-a066-acafa07cb760] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.02996696s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6m9lf" [ae7598ea-80b2-4cfd-a1c4-5fd4d4b3dab3] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.029851451s
addons_test.go:339: (dbg) Run:  kubectl --context addons-017503 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-017503 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-017503 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.189287404s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-017503 ip
2023/11/14 13:38:07 [DEBUG] GET http://192.168.39.41:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-017503 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.10s)
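
Note: the decisive step above is the in-pod `wget --spider` against the registry Service's cluster DNS name. A minimal Go sketch of the same reachability probe, assuming it runs inside the cluster where that name resolves (an illustration, not the test's actual code):

	// A HEAD request mirrors `wget --spider`: headers only, no body.
	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			log.Fatalf("registry unreachable: %v", err)
		}
		defer resp.Body.Close()
		fmt.Println("registry answered:", resp.Status)
	}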

TestAddons/parallel/Ingress (24.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-017503 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context addons-017503 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (1.627262289s)
addons_test.go:231: (dbg) Run:  kubectl --context addons-017503 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-017503 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a01c7074-4762-4b51-91f6-433b1e8d5804] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a01c7074-4762-4b51-91f6-433b1e8d5804] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.01564984s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-017503 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-017503 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-017503 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.41
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-017503 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-017503 addons disable ingress-dns --alsologtostderr -v=1: (2.049389118s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-017503 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-017503 addons disable ingress --alsologtostderr -v=1: (7.918396525s)
--- PASS: TestAddons/parallel/Ingress (24.57s)
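
Note: the `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` step checks host-based routing. A Go sketch of the same request, assuming it runs where the ingress controller listens on 127.0.0.1 (as it does inside the VM via `minikube ssh`):

	package main

	import (
		"fmt"
		"log"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
		if err != nil {
			log.Fatal(err)
		}
		// Setting req.Host is Go's equivalent of curl's -H 'Host: ...':
		// the connection targets 127.0.0.1, but the ingress controller
		// routes on this header.
		req.Host = "nginx.example.com"
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatalf("ingress not routing: %v", err)
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}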

TestAddons/parallel/InspektorGadget (11.08s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ljwgw" [b33a9857-9b5e-41a5-a0b3-219d68d05fae] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.028994906s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-017503
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-017503: (6.045516509s)
--- PASS: TestAddons/parallel/InspektorGadget (11.08s)

TestAddons/parallel/MetricsServer (6.27s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 15.920555ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-fddts" [ab3f5500-78ac-4527-8c6c-14bb704c1776] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.035951052s
addons_test.go:414: (dbg) Run:  kubectl --context addons-017503 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-017503 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:431: (dbg) Done: out/minikube-linux-amd64 -p addons-017503 addons disable metrics-server --alsologtostderr -v=1: (1.137558935s)
--- PASS: TestAddons/parallel/MetricsServer (6.27s)

TestAddons/parallel/HelmTiller (15.4s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.776931ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-fsm8w" [a4b7ae94-5444-4e8c-ac6f-6845ad6b7ec4] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0135281s
addons_test.go:472: (dbg) Run:  kubectl --context addons-017503 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-017503 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (10.191833646s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-017503 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.40s)

TestAddons/parallel/CSI (67.73s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 23.459407ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-017503 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-017503 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9db45626-e66f-422f-a986-be0aa7e2de0c] Pending
helpers_test.go:344: "task-pv-pod" [9db45626-e66f-422f-a986-be0aa7e2de0c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9db45626-e66f-422f-a986-be0aa7e2de0c] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.019598181s
addons_test.go:583: (dbg) Run:  kubectl --context addons-017503 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-017503 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-017503 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-017503 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-017503 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-017503 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-017503 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-017503 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ba1c4b4d-b48c-4af1-b121-4dcb49075863] Pending
helpers_test.go:344: "task-pv-pod-restore" [ba1c4b4d-b48c-4af1-b121-4dcb49075863] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ba1c4b4d-b48c-4af1-b121-4dcb49075863] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.025349826s
addons_test.go:625: (dbg) Run:  kubectl --context addons-017503 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-017503 delete pod task-pv-pod-restore: (1.10701041s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-017503 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-017503 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-017503 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-017503 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.679354594s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-017503 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (67.73s)
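
Note: each repeated helpers_test.go:394 line above is one iteration of a poll loop waiting for the PVC to report phase Bound. A sketch of such a loop, shelling out to kubectl with the names from this run (the 2s interval is an assumption):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCPhase polls `kubectl get pvc` until the PVC reports the
	// wanted phase or the deadline passes, like helpers_test.go does.
	func waitForPVCPhase(kubeContext, name, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", "default").Output()
			if err == nil && strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(2 * time.Second) // back off between polls
		}
		return fmt.Errorf("pvc %q never reached phase %q in %v", name, want, timeout)
	}

	func main() {
		if err := waitForPVCPhase("addons-017503", "hpvc", "Bound", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("pvc bound")
	}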

TestAddons/parallel/Headlamp (16.17s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-017503 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-017503 --alsologtostderr -v=1: (1.12553735s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-gt7xq" [440f1395-4e91-4cb8-bf4e-af997fea5537] Pending
helpers_test.go:344: "headlamp-777fd4b855-gt7xq" [440f1395-4e91-4cb8-bf4e-af997fea5537] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-gt7xq" [440f1395-4e91-4cb8-bf4e-af997fea5537] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.040700083s
--- PASS: TestAddons/parallel/Headlamp (16.17s)

TestAddons/parallel/CloudSpanner (5.67s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-h68mt" [000a24c0-992b-41ca-98fd-ad6302689669] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.027598185s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-017503
--- PASS: TestAddons/parallel/CloudSpanner (5.67s)

TestAddons/parallel/LocalPath (64.42s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-017503 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-017503 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-017503 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [cc65cf69-ed45-448d-bfaf-d1f2dfbf737c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [cc65cf69-ed45-448d-bfaf-d1f2dfbf737c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [cc65cf69-ed45-448d-bfaf-d1f2dfbf737c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 11.021698786s
addons_test.go:890: (dbg) Run:  kubectl --context addons-017503 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-017503 ssh "cat /opt/local-path-provisioner/pvc-da592b37-cbf6-4d35-a9e9-1be308d7dd95_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-017503 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-017503 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-017503 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-017503 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.257131731s)
--- PASS: TestAddons/parallel/LocalPath (64.42s)

TestAddons/parallel/NvidiaDevicePlugin (5.64s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-z6fpd" [0868e323-f83b-484c-9f2a-e3ccf2cb1157] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.014791436s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-017503
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.64s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-017503 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-017503 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (13.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-017503
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-017503: (13.105005208s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-017503
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-017503
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-017503
--- PASS: TestAddons/StoppedEnableDisable (13.40s)

TestCertOptions (87.63s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-306712 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-306712 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m26.122769047s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-306712 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
E1114 14:17:50.836553   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-306712 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-306712 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-306712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-306712
--- PASS: TestCertOptions (87.63s)
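
Note: the `openssl x509 -text -noout` step verifies that the extra --apiserver-ips and --apiserver-names appear as SANs on the API server certificate. An equivalent Go sketch, assuming the usual in-VM certificate path (the test reads it over ssh instead):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// With the flags used above, these should include localhost,
		// www.google.com, 127.0.0.1 and 192.168.15.15.
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs:", cert.IPAddresses)
	}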

TestCertExpiration (293.26s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-206494 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-206494 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m20.218723823s)
E1114 14:16:07.672246   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-206494 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-206494 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (31.943736181s)
helpers_test.go:175: Cleaning up "cert-expiration-206494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-206494
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-206494: (1.092156546s)
--- PASS: TestCertExpiration (293.26s)
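
Note: this test starts the cluster with --cert-expiration=3m, waits, then restarts with 8760h so the expired certificates get regenerated. A sketch of how one could inspect the expiry directly, reusing the same PEM-parsing pattern and the assumed in-VM path:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires:", cert.NotAfter)
		if time.Now().After(cert.NotAfter) {
			fmt.Println("expired; restarting with a longer --cert-expiration regenerates it")
		}
	}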

TestDockerFlags (80.85s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-921205 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-921205 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m19.168576059s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-921205 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-921205 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-921205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-921205
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-921205: (1.120755817s)
--- PASS: TestDockerFlags (80.85s)
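
Note: the `systemctl show docker --property=Environment` step asserts that the --docker-env values passed at start (FOO=BAR, BAZ=BAT in this run) reached the daemon's unit environment. A Go sketch of that assertion, meant to run inside the VM:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// systemd reports the unit's environment as a single property
		// line, e.g. "Environment=FOO=BAR BAZ=BAT".
		out, err := exec.Command("sudo", "systemctl", "show", "docker",
			"--property=Environment", "--no-pager").Output()
		if err != nil {
			log.Fatal(err)
		}
		env := string(out)
		for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(env, want) {
				log.Fatalf("missing %q in %q", want, env)
			}
		}
		fmt.Println("docker-env flags present:", strings.TrimSpace(env))
	}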

TestForceSystemdFlag (90.07s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-299588 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-299588 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m27.937215703s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-299588 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-299588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-299588
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-299588: (1.828140934s)
--- PASS: TestForceSystemdFlag (90.07s)
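
Note: the pass condition here is that `docker info --format {{.CgroupDriver}}` prints "systemd". A minimal Go sketch of the same check:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			log.Fatal(err)
		}
		driver := strings.TrimSpace(string(out))
		if driver != "systemd" {
			log.Fatalf("expected systemd cgroup driver, got %q", driver)
		}
		fmt.Println("cgroup driver:", driver)
	}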

TestForceSystemdEnv (63.15s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-271859 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-271859 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m1.493009524s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-271859 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-271859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-271859
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-271859: (1.338780112s)
--- PASS: TestForceSystemdEnv (63.15s)

TestKVMDriverInstallOrUpdate (3.71s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.71s)

TestErrorSpam/setup (48.48s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-057883 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-057883 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-057883 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-057883 --driver=kvm2 : (48.475108167s)
--- PASS: TestErrorSpam/setup (48.48s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.25s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 pause
--- PASS: TestErrorSpam/pause (1.25s)

TestErrorSpam/unpause (1.4s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 unpause
--- PASS: TestErrorSpam/unpause (1.40s)

TestErrorSpam/stop (12.6s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 stop: (12.437192599s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-057883 --log_dir /tmp/nospam-057883 stop
--- PASS: TestErrorSpam/stop (12.60s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17581-6041/.minikube/files/etc/test/nested/copy/13238/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (102.57s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-912212 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-912212 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m42.57437317s)
--- PASS: TestFunctional/serial/StartWithProxy (102.57s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.12s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-912212 --alsologtostderr -v=8
E1114 13:42:50.836782   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 13:42:50.842611   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 13:42:50.852841   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 13:42:50.873832   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 13:42:50.914149   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 13:42:50.994891   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 13:42:51.155148   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 13:42:51.475826   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 13:42:52.116659   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 13:42:53.397366   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 13:42:55.957552   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 13:43:01.078516   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-912212 --alsologtostderr -v=8: (37.120842267s)
functional_test.go:659: soft start took 37.121409287s for "functional-912212" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.12s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-912212 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.49s)

TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-912212 /tmp/TestFunctionalserialCacheCmdcacheadd_local1786825418/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 cache add minikube-local-cache-test:functional-912212
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 cache delete minikube-local-cache-test:functional-912212
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-912212
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-912212 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (248.379403ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)
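
Note: the round-trip above is: remove the image on the node, confirm crictl no longer finds it (the exit status 1), `cache reload`, confirm it is back. A Go sketch of the same sequence, with `minikube` standing in for the built binary (`out/minikube-linux-amd64` in this run):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// run executes a command, returning only its error, so the exit code
	// can be used as a pass/fail signal the way the test uses it.
	func run(args ...string) error {
		return exec.Command(args[0], args[1:]...).Run()
	}

	func main() {
		const profile = "functional-912212"
		const image = "registry.k8s.io/pause:latest"

		// 1. Remove the image inside the node.
		if err := run("minikube", "-p", profile, "ssh", "sudo docker rmi "+image); err != nil {
			log.Fatal(err)
		}
		// 2. crictl should now fail to find it (exit status 1 above).
		if err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+image); err == nil {
			log.Fatal("image unexpectedly still present")
		}
		// 3. Reload the cached images back into the node.
		if err := run("minikube", "-p", profile, "cache", "reload"); err != nil {
			log.Fatal(err)
		}
		// 4. The image should be visible again.
		if err := run("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
			log.Fatalf("image missing after reload: %v", err)
		}
		fmt.Println("cache reload round-trip OK")
	}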

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 kubectl -- --context functional-912212 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-912212 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (42.92s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-912212 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1114 13:43:11.318786   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 13:43:31.799862   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-912212 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.922971584s)
functional_test.go:757: restart took 42.92310718s for "functional-912212" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.92s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-912212 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
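
Note: this check lists the tier=control-plane pods as JSON and requires phase Running plus a Ready condition of True on each, which is what the phase/status lines above report. A self-contained Go sketch of that assertion:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-912212",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" && c.Status != "True" {
					log.Fatalf("%s not ready", p.Metadata.Name)
				}
			}
		}
	}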

TestFunctional/serial/LogsCmd (1.13s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-912212 logs: (1.13166121s)
--- PASS: TestFunctional/serial/LogsCmd (1.13s)

TestFunctional/serial/LogsFileCmd (1.15s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 logs --file /tmp/TestFunctionalserialLogsFileCmd1179055042/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-912212 logs --file /tmp/TestFunctionalserialLogsFileCmd1179055042/001/logs.txt: (1.151640525s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.15s)

TestFunctional/serial/InvalidService (4.36s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-912212 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-912212
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-912212: exit status 115 (311.241519ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.25:32056 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-912212 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.36s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-912212 config get cpus: exit status 14 (83.76585ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-912212 config get cpus: exit status 14 (63.940563ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
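
Note: `config get` on an unset key exits non-zero (status 14 in this run), which Go surfaces as *exec.ExitError. A sketch of reading that exit code:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-912212",
			"config", "get", "cpus")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// 14 in this run: the key is not present in the config.
			fmt.Println("unset key, exit code:", ee.ExitCode())
			return
		}
		if err != nil {
			log.Fatal(err) // the binary could not be started at all
		}
		fmt.Println("cpus is set")
	}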

TestFunctional/parallel/DashboardCmd (36.98s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-912212 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-912212 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 20729: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (36.98s)
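
Note: the dashboard runs as a background daemon that the test later stops; the "unable to kill pid ... process already finished" warning is the benign race where it exited first. A Go sketch of that start/stop pattern (the 30s wait is an assumption):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "dashboard", "--url",
			"--port", "36195", "-p", "functional-912212", "--alsologtostderr", "-v=1")
		if err := cmd.Start(); err != nil { // Start, not Run: leave it in the background
			log.Fatal(err)
		}
		time.Sleep(30 * time.Second) // give the tunnel time to come up

		if err := cmd.Process.Kill(); err != nil {
			// Matches the benign helpers_test.go warning above: the
			// daemon may have exited on its own before the kill.
			fmt.Println("unable to kill pid:", err)
		}
		_ = cmd.Wait() // reap the child either way
	}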

TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-912212 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-912212 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (152.745998ms)

-- stdout --
	* [functional-912212] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-6041/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-6041/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1114 13:44:14.508124   20351 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:44:14.508330   20351 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:44:14.508340   20351 out.go:309] Setting ErrFile to fd 2...
	I1114 13:44:14.508345   20351 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:44:14.508545   20351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
	I1114 13:44:14.509073   20351 out.go:303] Setting JSON to false
	I1114 13:44:14.510090   20351 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1605,"bootTime":1699967850,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 13:44:14.510149   20351 start.go:138] virtualization: kvm guest
	I1114 13:44:14.512116   20351 out.go:177] * [functional-912212] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1114 13:44:14.513995   20351 notify.go:220] Checking for updates...
	I1114 13:44:14.514001   20351 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 13:44:14.515299   20351 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:44:14.516587   20351 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 13:44:14.517874   20351 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-6041/.minikube
	I1114 13:44:14.519267   20351 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 13:44:14.520703   20351 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 13:44:14.522373   20351 config.go:182] Loaded profile config "functional-912212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 13:44:14.522808   20351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:44:14.522857   20351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:44:14.537443   20351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33529
	I1114 13:44:14.537850   20351 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:44:14.538376   20351 main.go:141] libmachine: Using API Version  1
	I1114 13:44:14.538400   20351 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:44:14.538668   20351 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:44:14.538877   20351 main.go:141] libmachine: (functional-912212) Calling .DriverName
	I1114 13:44:14.539144   20351 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:44:14.539427   20351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:44:14.539471   20351 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:44:14.553799   20351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44581
	I1114 13:44:14.554244   20351 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:44:14.554809   20351 main.go:141] libmachine: Using API Version  1
	I1114 13:44:14.554829   20351 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:44:14.555146   20351 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:44:14.555319   20351 main.go:141] libmachine: (functional-912212) Calling .DriverName
	I1114 13:44:14.590447   20351 out.go:177] * Using the kvm2 driver based on existing profile
	I1114 13:44:14.591718   20351 start.go:298] selected driver: kvm2
	I1114 13:44:14.591732   20351 start.go:902] validating driver "kvm2" against &{Name:functional-912212 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:functional-912212 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.25 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:44:14.591882   20351 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 13:44:14.593851   20351 out.go:177] 
	W1114 13:44:14.595552   20351 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1114 13:44:14.597000   20351 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-912212 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.34s)
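
The non-zero exit is the expected half of this test: --dry-run walks the full validation path without creating or touching the VM, so the undersized --memory request is rejected up front with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23), while the second, unconstrained invocation validates cleanly. Condensed:

	$ out/minikube-linux-amd64 start -p functional-912212 --dry-run --memory 250MB --driver=kvm2
	# exit 23: requested 250MiB is below the 1800MB usable minimum
	$ out/minikube-linux-amd64 start -p functional-912212 --dry-run --alsologtostderr -v=1 --driver=kvm2
	# exit 0: the existing profile passes validation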

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-912212 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-912212 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (167.870901ms)

-- stdout --
	* [functional-912212] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-6041/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-6041/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1114 13:44:14.662948   20386 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:44:14.663097   20386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:44:14.663109   20386 out.go:309] Setting ErrFile to fd 2...
	I1114 13:44:14.663116   20386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:44:14.663479   20386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
	I1114 13:44:14.664014   20386 out.go:303] Setting JSON to false
	I1114 13:44:14.665109   20386 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1605,"bootTime":1699967850,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1114 13:44:14.665196   20386 start.go:138] virtualization: kvm guest
	I1114 13:44:14.667518   20386 out.go:177] * [functional-912212] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1114 13:44:14.669110   20386 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 13:44:14.669097   20386 notify.go:220] Checking for updates...
	I1114 13:44:14.670500   20386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:44:14.671819   20386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-6041/kubeconfig
	I1114 13:44:14.673175   20386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-6041/.minikube
	I1114 13:44:14.674432   20386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1114 13:44:14.676949   20386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 13:44:14.678832   20386 config.go:182] Loaded profile config "functional-912212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 13:44:14.679525   20386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:44:14.679575   20386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:44:14.698931   20386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38567
	I1114 13:44:14.699449   20386 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:44:14.700041   20386 main.go:141] libmachine: Using API Version  1
	I1114 13:44:14.700066   20386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:44:14.700396   20386 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:44:14.700524   20386 main.go:141] libmachine: (functional-912212) Calling .DriverName
	I1114 13:44:14.700734   20386 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:44:14.701126   20386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:44:14.701167   20386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:44:14.718306   20386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I1114 13:44:14.718822   20386 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:44:14.719366   20386 main.go:141] libmachine: Using API Version  1
	I1114 13:44:14.719408   20386 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:44:14.719736   20386 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:44:14.719938   20386 main.go:141] libmachine: (functional-912212) Calling .DriverName
	I1114 13:44:14.753858   20386 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1114 13:44:14.754968   20386 start.go:298] selected driver: kvm2
	I1114 13:44:14.754982   20386 start.go:902] validating driver "kvm2" against &{Name:functional-912212 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17581/minikube-v1.32.1-1699648094-17581-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:functional-912212 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.25 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:44:14.755087   20386 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 13:44:14.756944   20386 out.go:177] 
	W1114 13:44:14.758304   20386 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1114 13:44:14.759647   20386 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
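
The French output above comes from minikube's localization: the test reruns the same dry-run command under a French locale, and the user-facing messages (including the RSRC_INSUFFICIENT_REQ_MEMORY explanation) are translated while the exit code stays the same. A sketch, assuming the locale is selected via the standard environment variable:

	$ LC_ALL=fr out/minikube-linux-amd64 start -p functional-912212 --dry-run --memory 250MB --driver=kvm2
	# "* Utilisation du pilote kvm2 basé sur le profil existant", exit 23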

TestFunctional/parallel/StatusCmd (1.15s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)
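
The -f flag shown above takes a Go template over the status structure (Host, Kubelet, APIServer, Kubeconfig, per the test's own command line), so callers can pick exactly the fields they need; -o json returns the same data machine-readably. For instance (output values assume a healthy cluster):

	$ out/minikube-linux-amd64 -p functional-912212 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
	host:Running,apiserver:Running
	$ out/minikube-linux-amd64 -p functional-912212 status -o json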

TestFunctional/parallel/ServiceCmdConnect (9.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-912212 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-912212 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-ml766" [a2b1c839-9bec-4ba0-929d-35ef80eacd79] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-ml766" [a2b1c839-9bec-4ba0-929d-35ef80eacd79] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.023664541s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.25:31314
functional_test.go:1674: http://192.168.39.25:31314: success! body:

Hostname: hello-node-connect-55497b8b78-ml766

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.25:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.25:31314
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.73s)
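
End to end this is the standard NodePort loop: create a deployment, expose it, let minikube resolve the node URL, then issue a plain HTTP request (the curl below stands in for the test's Go HTTP client):

	$ kubectl --context functional-912212 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	$ kubectl --context functional-912212 expose deployment hello-node-connect --type=NodePort --port=8080
	$ out/minikube-linux-amd64 -p functional-912212 service hello-node-connect --url
	http://192.168.39.25:31314
	$ curl -s http://192.168.39.25:31314    # echoserver reflects the request back, as in the body above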

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 addons list -o json
E1114 13:44:12.760538   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (55.05s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3ca605c8-a503-4fa5-a953-b0e803b5a2b5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014897687s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-912212 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-912212 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-912212 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-912212 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-912212 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f783c87c-5b36-4928-a84c-246b239f3ed6] Pending
helpers_test.go:344: "sp-pod" [f783c87c-5b36-4928-a84c-246b239f3ed6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f783c87c-5b36-4928-a84c-246b239f3ed6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.022808539s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-912212 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-912212 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-912212 delete -f testdata/storage-provisioner/pod.yaml: (1.774834188s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-912212 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4b935611-db50-4995-bb58-7ce6bb46509b] Pending
helpers_test.go:344: "sp-pod" [4b935611-db50-4995-bb58-7ce6bb46509b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4b935611-db50-4995-bb58-7ce6bb46509b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.013805417s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-912212 exec sp-pod -- ls /tmp/mount
2023/11/14 13:44:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (55.05s)
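
The second half is the actual persistence check: a file written through the claim must survive deleting and recreating the pod, because the PVC and its bound volume outlive any single consumer. Condensed from the steps above:

	$ kubectl --context functional-912212 apply -f testdata/storage-provisioner/pvc.yaml
	$ kubectl --context functional-912212 apply -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-912212 exec sp-pod -- touch /tmp/mount/foo
	$ kubectl --context functional-912212 delete -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-912212 apply -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-912212 exec sp-pod -- ls /tmp/mount    # foo persists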

TestFunctional/parallel/SSHCmd (0.58s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

TestFunctional/parallel/CpCmd (1.1s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh -n functional-912212 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 cp functional-912212:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3839913371/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh -n functional-912212 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.10s)
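
The two cp invocations exercise both directions: host-to-node takes a plain destination path, node-to-host prefixes the source with the node name. In short (the local destination below is an arbitrary path, not the test's tempdir):

	$ out/minikube-linux-amd64 -p functional-912212 cp testdata/cp-test.txt /home/docker/cp-test.txt
	$ out/minikube-linux-amd64 -p functional-912212 cp functional-912212:/home/docker/cp-test.txt ./cp-test.txt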

TestFunctional/parallel/MySQL (36.14s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-912212 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-4pds5" [69a061e5-54bd-4917-9319-176894a82dee] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-4pds5" [69a061e5-54bd-4917-9319-176894a82dee] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.071629195s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-912212 exec mysql-859648c796-4pds5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-912212 exec mysql-859648c796-4pds5 -- mysql -ppassword -e "show databases;": exit status 1 (210.325885ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-912212 exec mysql-859648c796-4pds5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-912212 exec mysql-859648c796-4pds5 -- mysql -ppassword -e "show databases;": exit status 1 (319.265327ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-912212 exec mysql-859648c796-4pds5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-912212 exec mysql-859648c796-4pds5 -- mysql -ppassword -e "show databases;": exit status 1 (198.57388ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-912212 exec mysql-859648c796-4pds5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (36.14s)
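
The three failed probes above are expected noise: the pod reports Running as soon as the container starts, but mysqld still needs time to initialize, so the test simply retries the query until it succeeds (ERROR 1045 and ERROR 2002 are the transient startup states). The probe itself is just:

	$ kubectl --context functional-912212 exec mysql-859648c796-4pds5 -- mysql -ppassword -e "show databases;"
	# retried until mysqld finishes initializing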

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13238/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "sudo cat /etc/test/nested/copy/13238/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
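
This exercises minikube's file-sync convention: files placed under $MINIKUBE_HOME/files on the host are copied to the corresponding absolute path inside the VM when the cluster starts (13238 here is the test run's PID). An illustrative setup, with paths assumed from that convention:

	$ mkdir -p ~/.minikube/files/etc/test/nested/copy/13238
	$ echo 'Test file for checking file sync process' > ~/.minikube/files/etc/test/nested/copy/13238/hosts
	$ out/minikube-linux-amd64 -p functional-912212 ssh "sudo cat /etc/test/nested/copy/13238/hosts"
	Test file for checking file sync process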

TestFunctional/parallel/CertSync (1.45s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13238.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "sudo cat /etc/ssl/certs/13238.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13238.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "sudo cat /usr/share/ca-certificates/13238.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/132382.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "sudo cat /etc/ssl/certs/132382.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/132382.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "sudo cat /usr/share/ca-certificates/132382.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.45s)
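
CertSync is the certificate analogue: certs dropped under $MINIKUBE_HOME/certs on the host are installed into the VM at both /etc/ssl/certs and /usr/share/ca-certificates, together with an OpenSSL subject-hash name (the 51391683.0 and 3ec20f2e.0 entries checked above; the hash-alias detail is inferred from those filenames). Verification is the same ssh-and-cat pattern:

	$ out/minikube-linux-amd64 -p functional-912212 ssh "sudo cat /etc/ssl/certs/13238.pem"
	$ out/minikube-linux-amd64 -p functional-912212 ssh "sudo cat /etc/ssl/certs/51391683.0"    # hash-named copy of the same cert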

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-912212 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-912212 ssh "sudo systemctl is-active crio": exit status 1 (245.121957ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)
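
The exit status 1 is the pass condition here, not a failure: with docker as the active runtime, `systemctl is-active crio` prints "inactive" and exits non-zero (systemd uses exit code 3 for inactive units, surfaced through ssh as "Process exited with status 3"):

	$ out/minikube-linux-amd64 -p functional-912212 ssh "sudo systemctl is-active crio"
	inactive    # non-zero exit confirms crio is not running alongside docker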

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-912212 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-912212 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-sx9ns" [9af42eac-d25d-40af-8d80-7a47dd3bbc68] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-sx9ns" [9af42eac-d25d-40af-8d80-7a47dd3bbc68] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.038986447s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.28s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "279.066496ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "79.209067ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/MountCmd/any-port (9.99s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-912212 /tmp/TestFunctionalparallelMountCmdany-port1387033860/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1699969439610398098" to /tmp/TestFunctionalparallelMountCmdany-port1387033860/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1699969439610398098" to /tmp/TestFunctionalparallelMountCmdany-port1387033860/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1699969439610398098" to /tmp/TestFunctionalparallelMountCmdany-port1387033860/001/test-1699969439610398098
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-912212 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (256.772425ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 14 13:43 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 14 13:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 14 13:43 test-1699969439610398098
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh cat /mount-9p/test-1699969439610398098
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-912212 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e3fccc98-8f8b-4dc2-b6b3-c8c329041212] Pending
helpers_test.go:344: "busybox-mount" [e3fccc98-8f8b-4dc2-b6b3-c8c329041212] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e3fccc98-8f8b-4dc2-b6b3-c8c329041212] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e3fccc98-8f8b-4dc2-b6b3-c8c329041212] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.023580635s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-912212 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-912212 /tmp/TestFunctionalparallelMountCmdany-port1387033860/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.99s)
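
The mount test drives a 9p host mount end to end: share a host directory into the VM, confirm the mount from both sides (the first findmnt above fails because the mount was not ready yet, then the retry succeeds), run a pod against it, then force-unmount. The core commands, condensed (the host path is any directory you want to share; mount must stay running while the share is in use):

	$ out/minikube-linux-amd64 mount -p functional-912212 /tmp/share:/mount-9p &
	$ out/minikube-linux-amd64 -p functional-912212 ssh "findmnt -T /mount-9p | grep 9p"
	$ out/minikube-linux-amd64 -p functional-912212 ssh -- ls -la /mount-9p
	$ out/minikube-linux-amd64 -p functional-912212 ssh "sudo umount -f /mount-9p"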

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "309.443956ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "61.73333ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
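
Both listings emit JSON suitable for scripting; the --light variant skips the live status probe, which is why it returns in ~62ms against ~309ms for the full list above. Extracting profile names, for example (jq is illustrative; the valid/invalid split is assumed from minikube's JSON output shape):

	$ out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'
	functional-912212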

TestFunctional/parallel/Version/short (0.17s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 version --short
--- PASS: TestFunctional/parallel/Version/short (0.17s)

TestFunctional/parallel/Version/components (0.83s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.83s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-912212 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-912212
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-912212
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-912212 image ls --format short --alsologtostderr:
I1114 13:44:25.873281   20860 out.go:296] Setting OutFile to fd 1 ...
I1114 13:44:25.873404   20860 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:25.873413   20860 out.go:309] Setting ErrFile to fd 2...
I1114 13:44:25.873418   20860 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:25.873629   20860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
I1114 13:44:25.874209   20860 config.go:182] Loaded profile config "functional-912212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1114 13:44:25.874310   20860 config.go:182] Loaded profile config "functional-912212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1114 13:44:25.874679   20860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1114 13:44:25.874730   20860 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 13:44:25.888618   20860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
I1114 13:44:25.889040   20860 main.go:141] libmachine: () Calling .GetVersion
I1114 13:44:25.889597   20860 main.go:141] libmachine: Using API Version  1
I1114 13:44:25.889626   20860 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 13:44:25.889956   20860 main.go:141] libmachine: () Calling .GetMachineName
I1114 13:44:25.890148   20860 main.go:141] libmachine: (functional-912212) Calling .GetState
I1114 13:44:25.891778   20860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1114 13:44:25.891829   20860 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 13:44:25.906756   20860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36015
I1114 13:44:25.907176   20860 main.go:141] libmachine: () Calling .GetVersion
I1114 13:44:25.907772   20860 main.go:141] libmachine: Using API Version  1
I1114 13:44:25.907813   20860 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 13:44:25.908219   20860 main.go:141] libmachine: () Calling .GetMachineName
I1114 13:44:25.908454   20860 main.go:141] libmachine: (functional-912212) Calling .DriverName
I1114 13:44:25.908678   20860 ssh_runner.go:195] Run: systemctl --version
I1114 13:44:25.908701   20860 main.go:141] libmachine: (functional-912212) Calling .GetSSHHostname
I1114 13:44:25.911897   20860 main.go:141] libmachine: (functional-912212) DBG | domain functional-912212 has defined MAC address 52:54:00:98:04:06 in network mk-functional-912212
I1114 13:44:25.912299   20860 main.go:141] libmachine: (functional-912212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:04:06", ip: ""} in network mk-functional-912212: {Iface:virbr1 ExpiryTime:2023-11-14 14:40:59 +0000 UTC Type:0 Mac:52:54:00:98:04:06 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-912212 Clientid:01:52:54:00:98:04:06}
I1114 13:44:25.912343   20860 main.go:141] libmachine: (functional-912212) DBG | domain functional-912212 has defined IP address 192.168.39.25 and MAC address 52:54:00:98:04:06 in network mk-functional-912212
I1114 13:44:25.912519   20860 main.go:141] libmachine: (functional-912212) Calling .GetSSHPort
I1114 13:44:25.912691   20860 main.go:141] libmachine: (functional-912212) Calling .GetSSHKeyPath
I1114 13:44:25.912879   20860 main.go:141] libmachine: (functional-912212) Calling .GetSSHUsername
I1114 13:44:25.913031   20860 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/functional-912212/id_rsa Username:docker}
I1114 13:44:26.013907   20860 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1114 13:44:26.091808   20860 main.go:141] libmachine: Making call to close driver server
I1114 13:44:26.091820   20860 main.go:141] libmachine: (functional-912212) Calling .Close
I1114 13:44:26.092090   20860 main.go:141] libmachine: (functional-912212) DBG | Closing plugin on server side
I1114 13:44:26.092149   20860 main.go:141] libmachine: Successfully made call to close driver server
I1114 13:44:26.092158   20860 main.go:141] libmachine: Making call to close connection to plugin binary
I1114 13:44:26.092174   20860 main.go:141] libmachine: Making call to close driver server
I1114 13:44:26.092183   20860 main.go:141] libmachine: (functional-912212) Calling .Close
I1114 13:44:26.092401   20860 main.go:141] libmachine: Successfully made call to close driver server
I1114 13:44:26.092419   20860 main.go:141] libmachine: Making call to close connection to plugin binary
I1114 13:44:26.092437   20860 main.go:141] libmachine: (functional-912212) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
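
image ls prints the node's image store in several formats; --format short gives just the repo:tag list above, while --format table (next test) adds image IDs and sizes. The bulk of the stderr is libmachine plumbing: the command dials the kvm2 driver plugin, opens an ssh session to the VM, and, per the ssh_runner line above, effectively runs:

	$ out/minikube-linux-amd64 -p functional-912212 ssh -- docker images --no-trunc --format "{{json .}}"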

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-912212 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer      | functional-912212 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-proxy                  | v1.28.3           | bfc896cf80fba | 73.1MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-scheduler              | v1.28.3           | 6d1b4fd1b182d | 60.1MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/localhost/my-image                | functional-912212 | 5af981a7d8a1e | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-912212 | 7009aad11a68e | 30B    |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.28.3           | 5374347291230 | 126MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.3           | 10baa1ca17068 | 122MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/nginx                     | latest            | c20060033e06f | 187MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-912212 image ls --format table --alsologtostderr:
I1114 13:44:30.512401   21040 out.go:296] Setting OutFile to fd 1 ...
I1114 13:44:30.512529   21040 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:30.512538   21040 out.go:309] Setting ErrFile to fd 2...
I1114 13:44:30.512548   21040 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:30.512731   21040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
I1114 13:44:30.513296   21040 config.go:182] Loaded profile config "functional-912212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1114 13:44:30.513417   21040 config.go:182] Loaded profile config "functional-912212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1114 13:44:30.513868   21040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1114 13:44:30.513915   21040 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 13:44:30.528288   21040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38015
I1114 13:44:30.528693   21040 main.go:141] libmachine: () Calling .GetVersion
I1114 13:44:30.529309   21040 main.go:141] libmachine: Using API Version  1
I1114 13:44:30.529339   21040 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 13:44:30.529672   21040 main.go:141] libmachine: () Calling .GetMachineName
I1114 13:44:30.529845   21040 main.go:141] libmachine: (functional-912212) Calling .GetState
I1114 13:44:30.531558   21040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1114 13:44:30.531596   21040 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 13:44:30.545974   21040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46399
I1114 13:44:30.546354   21040 main.go:141] libmachine: () Calling .GetVersion
I1114 13:44:30.546839   21040 main.go:141] libmachine: Using API Version  1
I1114 13:44:30.546865   21040 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 13:44:30.547185   21040 main.go:141] libmachine: () Calling .GetMachineName
I1114 13:44:30.547346   21040 main.go:141] libmachine: (functional-912212) Calling .DriverName
I1114 13:44:30.547545   21040 ssh_runner.go:195] Run: systemctl --version
I1114 13:44:30.547568   21040 main.go:141] libmachine: (functional-912212) Calling .GetSSHHostname
I1114 13:44:30.550353   21040 main.go:141] libmachine: (functional-912212) DBG | domain functional-912212 has defined MAC address 52:54:00:98:04:06 in network mk-functional-912212
I1114 13:44:30.550744   21040 main.go:141] libmachine: (functional-912212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:04:06", ip: ""} in network mk-functional-912212: {Iface:virbr1 ExpiryTime:2023-11-14 14:40:59 +0000 UTC Type:0 Mac:52:54:00:98:04:06 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-912212 Clientid:01:52:54:00:98:04:06}
I1114 13:44:30.550768   21040 main.go:141] libmachine: (functional-912212) DBG | domain functional-912212 has defined IP address 192.168.39.25 and MAC address 52:54:00:98:04:06 in network mk-functional-912212
I1114 13:44:30.550958   21040 main.go:141] libmachine: (functional-912212) Calling .GetSSHPort
I1114 13:44:30.551121   21040 main.go:141] libmachine: (functional-912212) Calling .GetSSHKeyPath
I1114 13:44:30.551285   21040 main.go:141] libmachine: (functional-912212) Calling .GetSSHUsername
I1114 13:44:30.551430   21040 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/functional-912212/id_rsa Username:docker}
I1114 13:44:30.648170   21040 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1114 13:44:30.692002   21040 main.go:141] libmachine: Making call to close driver server
I1114 13:44:30.692018   21040 main.go:141] libmachine: (functional-912212) Calling .Close
I1114 13:44:30.692292   21040 main.go:141] libmachine: Successfully made call to close driver server
I1114 13:44:30.692384   21040 main.go:141] libmachine: (functional-912212) DBG | Closing plugin on server side
I1114 13:44:30.692410   21040 main.go:141] libmachine: Making call to close connection to plugin binary
I1114 13:44:30.692422   21040 main.go:141] libmachine: Making call to close driver server
I1114 13:44:30.692431   21040 main.go:141] libmachine: (functional-912212) Calling .Close
I1114 13:44:30.692752   21040 main.go:141] libmachine: (functional-912212) DBG | Closing plugin on server side
I1114 13:44:30.692792   21040 main.go:141] libmachine: Successfully made call to close driver server
I1114 13:44:30.692808   21040 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
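Note that the table above is rendered by minikube itself: as the stderr log shows, it opens an SSH session into the VM and collects one JSON object per image via docker images --no-trunc --format "{{json .}}". The following is a minimal Go sketch (not minikube's actual implementation) of consuming that per-line JSON stream; the struct keeps only a few of the fields Docker emits, and the field names come from Docker's own template context:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// dockerImage mirrors a subset of the keys that
// `docker images --format "{{json .}}"` prints per line.
type dockerImage struct {
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	ID         string `json:"ID"`
	Size       string `json:"Size"` // human-readable, e.g. "187MB"
}

func main() {
	cmd := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() { // one JSON object per output line
		var img dockerImage
		if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
			continue // skip anything that is not a JSON image record
		}
		fmt.Printf("%-45s %-18s %s\n", img.Repository, img.Tag, img.Size)
	}
	if err := cmd.Wait(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}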

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-912212 image ls --format json --alsologtostderr:
[{"id":"c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"5af981a7d8a1ef3a954666309fc7ee2a8811340f23681463ad0fb20b275a7321","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-912212"],"size":"1240000"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"60100000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"126000000"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"122000000"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"73100000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"7009aad11a68eecfc49080a45ac8dcb4c23cc1db771ad5d0379969ebccb1bed8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-912212"],"size":"30"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-912212"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-912212 image ls --format json --alsologtostderr:
I1114 13:44:30.270787   21016 out.go:296] Setting OutFile to fd 1 ...
I1114 13:44:30.270912   21016 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:30.270923   21016 out.go:309] Setting ErrFile to fd 2...
I1114 13:44:30.270927   21016 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:30.271107   21016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
I1114 13:44:30.271727   21016 config.go:182] Loaded profile config "functional-912212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1114 13:44:30.271843   21016 config.go:182] Loaded profile config "functional-912212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1114 13:44:30.272235   21016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1114 13:44:30.272289   21016 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 13:44:30.286946   21016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
I1114 13:44:30.287504   21016 main.go:141] libmachine: () Calling .GetVersion
I1114 13:44:30.288117   21016 main.go:141] libmachine: Using API Version  1
I1114 13:44:30.288144   21016 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 13:44:30.288498   21016 main.go:141] libmachine: () Calling .GetMachineName
I1114 13:44:30.288684   21016 main.go:141] libmachine: (functional-912212) Calling .GetState
I1114 13:44:30.290792   21016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1114 13:44:30.290859   21016 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 13:44:30.305491   21016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39547
I1114 13:44:30.305913   21016 main.go:141] libmachine: () Calling .GetVersion
I1114 13:44:30.306489   21016 main.go:141] libmachine: Using API Version  1
I1114 13:44:30.306515   21016 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 13:44:30.306856   21016 main.go:141] libmachine: () Calling .GetMachineName
I1114 13:44:30.307064   21016 main.go:141] libmachine: (functional-912212) Calling .DriverName
I1114 13:44:30.307274   21016 ssh_runner.go:195] Run: systemctl --version
I1114 13:44:30.307294   21016 main.go:141] libmachine: (functional-912212) Calling .GetSSHHostname
I1114 13:44:30.310582   21016 main.go:141] libmachine: (functional-912212) DBG | domain functional-912212 has defined MAC address 52:54:00:98:04:06 in network mk-functional-912212
I1114 13:44:30.311108   21016 main.go:141] libmachine: (functional-912212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:04:06", ip: ""} in network mk-functional-912212: {Iface:virbr1 ExpiryTime:2023-11-14 14:40:59 +0000 UTC Type:0 Mac:52:54:00:98:04:06 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-912212 Clientid:01:52:54:00:98:04:06}
I1114 13:44:30.311148   21016 main.go:141] libmachine: (functional-912212) DBG | domain functional-912212 has defined IP address 192.168.39.25 and MAC address 52:54:00:98:04:06 in network mk-functional-912212
I1114 13:44:30.311340   21016 main.go:141] libmachine: (functional-912212) Calling .GetSSHPort
I1114 13:44:30.311515   21016 main.go:141] libmachine: (functional-912212) Calling .GetSSHKeyPath
I1114 13:44:30.311671   21016 main.go:141] libmachine: (functional-912212) Calling .GetSSHUsername
I1114 13:44:30.311801   21016 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/functional-912212/id_rsa Username:docker}
I1114 13:44:30.408304   21016 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1114 13:44:30.445454   21016 main.go:141] libmachine: Making call to close driver server
I1114 13:44:30.445476   21016 main.go:141] libmachine: (functional-912212) Calling .Close
I1114 13:44:30.445799   21016 main.go:141] libmachine: (functional-912212) DBG | Closing plugin on server side
I1114 13:44:30.445836   21016 main.go:141] libmachine: Successfully made call to close driver server
I1114 13:44:30.445854   21016 main.go:141] libmachine: Making call to close connection to plugin binary
I1114 13:44:30.445873   21016 main.go:141] libmachine: Making call to close driver server
I1114 13:44:30.445885   21016 main.go:141] libmachine: (functional-912212) Calling .Close
I1114 13:44:30.446111   21016 main.go:141] libmachine: Successfully made call to close driver server
I1114 13:44:30.446132   21016 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
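Each element of the JSON array above carries the same four fields: id, repoDigests, repoTags, and size (bytes, serialized as a string). A minimal sketch of decoding that output in Go, assuming only the structure visible in this stdout rather than any published minikube API:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// listedImage matches the object shape visible in the stdout above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-912212",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s  %.13s  %s bytes\n", tag, img.ID, img.Size)
		}
	}
}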

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-912212 image ls --format yaml --alsologtostderr:
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "60100000"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "73100000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-912212
size: "32900000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "126000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 7009aad11a68eecfc49080a45ac8dcb4c23cc1db771ad5d0379969ebccb1bed8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-912212
size: "30"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "122000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-912212 image ls --format yaml --alsologtostderr:
I1114 13:44:26.162991   20894 out.go:296] Setting OutFile to fd 1 ...
I1114 13:44:26.163177   20894 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:26.163190   20894 out.go:309] Setting ErrFile to fd 2...
I1114 13:44:26.163197   20894 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:26.163477   20894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
I1114 13:44:26.164291   20894 config.go:182] Loaded profile config "functional-912212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1114 13:44:26.164468   20894 config.go:182] Loaded profile config "functional-912212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1114 13:44:26.165155   20894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1114 13:44:26.165210   20894 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 13:44:26.179964   20894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39469
I1114 13:44:26.180427   20894 main.go:141] libmachine: () Calling .GetVersion
I1114 13:44:26.181080   20894 main.go:141] libmachine: Using API Version  1
I1114 13:44:26.181105   20894 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 13:44:26.181420   20894 main.go:141] libmachine: () Calling .GetMachineName
I1114 13:44:26.181648   20894 main.go:141] libmachine: (functional-912212) Calling .GetState
I1114 13:44:26.183706   20894 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1114 13:44:26.183760   20894 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 13:44:26.197750   20894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
I1114 13:44:26.198200   20894 main.go:141] libmachine: () Calling .GetVersion
I1114 13:44:26.198745   20894 main.go:141] libmachine: Using API Version  1
I1114 13:44:26.198776   20894 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 13:44:26.199136   20894 main.go:141] libmachine: () Calling .GetMachineName
I1114 13:44:26.199352   20894 main.go:141] libmachine: (functional-912212) Calling .DriverName
I1114 13:44:26.199570   20894 ssh_runner.go:195] Run: systemctl --version
I1114 13:44:26.199612   20894 main.go:141] libmachine: (functional-912212) Calling .GetSSHHostname
I1114 13:44:26.203051   20894 main.go:141] libmachine: (functional-912212) DBG | domain functional-912212 has defined MAC address 52:54:00:98:04:06 in network mk-functional-912212
I1114 13:44:26.203506   20894 main.go:141] libmachine: (functional-912212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:04:06", ip: ""} in network mk-functional-912212: {Iface:virbr1 ExpiryTime:2023-11-14 14:40:59 +0000 UTC Type:0 Mac:52:54:00:98:04:06 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-912212 Clientid:01:52:54:00:98:04:06}
I1114 13:44:26.203539   20894 main.go:141] libmachine: (functional-912212) DBG | domain functional-912212 has defined IP address 192.168.39.25 and MAC address 52:54:00:98:04:06 in network mk-functional-912212
I1114 13:44:26.203667   20894 main.go:141] libmachine: (functional-912212) Calling .GetSSHPort
I1114 13:44:26.203856   20894 main.go:141] libmachine: (functional-912212) Calling .GetSSHKeyPath
I1114 13:44:26.204043   20894 main.go:141] libmachine: (functional-912212) Calling .GetSSHUsername
I1114 13:44:26.204189   20894 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/functional-912212/id_rsa Username:docker}
I1114 13:44:26.300106   20894 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1114 13:44:26.397697   20894 main.go:141] libmachine: Making call to close driver server
I1114 13:44:26.397713   20894 main.go:141] libmachine: (functional-912212) Calling .Close
I1114 13:44:26.397974   20894 main.go:141] libmachine: Successfully made call to close driver server
I1114 13:44:26.397991   20894 main.go:141] libmachine: (functional-912212) DBG | Closing plugin on server side
I1114 13:44:26.398024   20894 main.go:141] libmachine: Making call to close connection to plugin binary
I1114 13:44:26.398045   20894 main.go:141] libmachine: Making call to close driver server
I1114 13:44:26.398058   20894 main.go:141] libmachine: (functional-912212) Calling .Close
I1114 13:44:26.398322   20894 main.go:141] libmachine: (functional-912212) DBG | Closing plugin on server side
I1114 13:44:26.398361   20894 main.go:141] libmachine: Successfully made call to close driver server
I1114 13:44:26.398379   20894 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-912212 ssh pgrep buildkitd: exit status 1 (252.629525ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image build -t localhost/my-image:functional-912212 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-912212 image build -t localhost/my-image:functional-912212 testdata/build --alsologtostderr: (3.316073638s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-912212 image build -t localhost/my-image:functional-912212 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 89f4a3bd6cd0
Removing intermediate container 89f4a3bd6cd0
---> 043e0b17a4d2
Step 3/3 : ADD content.txt /
---> 5af981a7d8a1
Successfully built 5af981a7d8a1
Successfully tagged localhost/my-image:functional-912212
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-912212 image build -t localhost/my-image:functional-912212 testdata/build --alsologtostderr:
I1114 13:44:26.712589   20948 out.go:296] Setting OutFile to fd 1 ...
I1114 13:44:26.712748   20948 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:26.712757   20948 out.go:309] Setting ErrFile to fd 2...
I1114 13:44:26.712761   20948 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:26.712951   20948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
I1114 13:44:26.713579   20948 config.go:182] Loaded profile config "functional-912212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1114 13:44:26.714169   20948 config.go:182] Loaded profile config "functional-912212": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1114 13:44:26.714558   20948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1114 13:44:26.714617   20948 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 13:44:26.728967   20948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
I1114 13:44:26.729418   20948 main.go:141] libmachine: () Calling .GetVersion
I1114 13:44:26.729976   20948 main.go:141] libmachine: Using API Version  1
I1114 13:44:26.729999   20948 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 13:44:26.730398   20948 main.go:141] libmachine: () Calling .GetMachineName
I1114 13:44:26.730610   20948 main.go:141] libmachine: (functional-912212) Calling .GetState
I1114 13:44:26.732588   20948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1114 13:44:26.732643   20948 main.go:141] libmachine: Launching plugin server for driver kvm2
I1114 13:44:26.746779   20948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36619
I1114 13:44:26.747171   20948 main.go:141] libmachine: () Calling .GetVersion
I1114 13:44:26.747677   20948 main.go:141] libmachine: Using API Version  1
I1114 13:44:26.747705   20948 main.go:141] libmachine: () Calling .SetConfigRaw
I1114 13:44:26.748011   20948 main.go:141] libmachine: () Calling .GetMachineName
I1114 13:44:26.748198   20948 main.go:141] libmachine: (functional-912212) Calling .DriverName
I1114 13:44:26.748387   20948 ssh_runner.go:195] Run: systemctl --version
I1114 13:44:26.748420   20948 main.go:141] libmachine: (functional-912212) Calling .GetSSHHostname
I1114 13:44:26.750996   20948 main.go:141] libmachine: (functional-912212) DBG | domain functional-912212 has defined MAC address 52:54:00:98:04:06 in network mk-functional-912212
I1114 13:44:26.751419   20948 main.go:141] libmachine: (functional-912212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:04:06", ip: ""} in network mk-functional-912212: {Iface:virbr1 ExpiryTime:2023-11-14 14:40:59 +0000 UTC Type:0 Mac:52:54:00:98:04:06 Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-912212 Clientid:01:52:54:00:98:04:06}
I1114 13:44:26.751452   20948 main.go:141] libmachine: (functional-912212) DBG | domain functional-912212 has defined IP address 192.168.39.25 and MAC address 52:54:00:98:04:06 in network mk-functional-912212
I1114 13:44:26.751615   20948 main.go:141] libmachine: (functional-912212) Calling .GetSSHPort
I1114 13:44:26.751785   20948 main.go:141] libmachine: (functional-912212) Calling .GetSSHKeyPath
I1114 13:44:26.752023   20948 main.go:141] libmachine: (functional-912212) Calling .GetSSHUsername
I1114 13:44:26.752218   20948 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/functional-912212/id_rsa Username:docker}
I1114 13:44:26.856839   20948 build_images.go:151] Building image from path: /tmp/build.528456562.tar
I1114 13:44:26.856916   20948 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1114 13:44:26.872985   20948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.528456562.tar
I1114 13:44:26.877891   20948 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.528456562.tar: stat -c "%s %y" /var/lib/minikube/build/build.528456562.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.528456562.tar': No such file or directory
I1114 13:44:26.877935   20948 ssh_runner.go:362] scp /tmp/build.528456562.tar --> /var/lib/minikube/build/build.528456562.tar (3072 bytes)
I1114 13:44:26.923294   20948 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.528456562
I1114 13:44:26.943508   20948 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.528456562 -xf /var/lib/minikube/build/build.528456562.tar
I1114 13:44:26.959132   20948 docker.go:346] Building image: /var/lib/minikube/build/build.528456562
I1114 13:44:26.959197   20948 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-912212 /var/lib/minikube/build/build.528456562
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1114 13:44:29.946810   20948 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-912212 /var/lib/minikube/build/build.528456562: (2.987588041s)
I1114 13:44:29.946905   20948 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.528456562
I1114 13:44:29.956896   20948 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.528456562.tar
I1114 13:44:29.969970   20948 build_images.go:207] Built localhost/my-image:functional-912212 from /tmp/build.528456562.tar
I1114 13:44:29.970005   20948 build_images.go:123] succeeded building to: functional-912212
I1114 13:44:29.970010   20948 build_images.go:124] failed building to: 
I1114 13:44:29.970101   20948 main.go:141] libmachine: Making call to close driver server
I1114 13:44:29.970149   20948 main.go:141] libmachine: (functional-912212) Calling .Close
I1114 13:44:29.970446   20948 main.go:141] libmachine: Successfully made call to close driver server
I1114 13:44:29.970465   20948 main.go:141] libmachine: Making call to close connection to plugin binary
I1114 13:44:29.970479   20948 main.go:141] libmachine: Making call to close driver server
I1114 13:44:29.970502   20948 main.go:141] libmachine: (functional-912212) Calling .Close
I1114 13:44:29.970504   20948 main.go:141] libmachine: (functional-912212) DBG | Closing plugin on server side
I1114 13:44:29.970768   20948 main.go:141] libmachine: (functional-912212) DBG | Closing plugin on server side
I1114 13:44:29.970802   20948 main.go:141] libmachine: Successfully made call to close driver server
I1114 13:44:29.970816   20948 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)
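For reference, the three steps in the build log imply that testdata/build contains a Dockerfile of roughly this shape (reconstructed from the output above, not copied from the repository):

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /

The DEPRECATED warning in the stderr comes from the legacy docker build path inside the VM; per the message itself it is informational only, and the build still completes and tags localhost/my-image:functional-912212.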

TestFunctional/parallel/ImageCommands/Setup (1.46s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.443398865s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-912212
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.46s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image load --daemon gcr.io/google-containers/addon-resizer:functional-912212 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-912212 image load --daemon gcr.io/google-containers/addon-resizer:functional-912212 --alsologtostderr: (4.039306072s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.26s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image load --daemon gcr.io/google-containers/addon-resizer:functional-912212 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-912212 image load --daemon gcr.io/google-containers/addon-resizer:functional-912212 --alsologtostderr: (2.173845527s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.40s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.259457356s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-912212
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image load --daemon gcr.io/google-containers/addon-resizer:functional-912212 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-912212 image load --daemon gcr.io/google-containers/addon-resizer:functional-912212 --alsologtostderr: (4.588226231s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.25s)

TestFunctional/parallel/MountCmd/specific-port (1.89s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-912212 /tmp/TestFunctionalparallelMountCmdspecific-port587124192/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-912212 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (338.466374ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-912212 /tmp/TestFunctionalparallelMountCmdspecific-port587124192/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-912212 ssh "sudo umount -f /mount-9p": exit status 1 (246.069923ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-912212 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-912212 /tmp/TestFunctionalparallelMountCmdspecific-port587124192/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.89s)

TestFunctional/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 service list -o json
functional_test.go:1493: Took "326.97685ms" to run "out/minikube-linux-amd64 -p functional-912212 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.25:30772
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-912212 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2873905139/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-912212 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2873905139/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-912212 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2873905139/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-912212 ssh "findmnt -T" /mount1: exit status 1 (429.928778ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-912212 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-912212 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2873905139/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-912212 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2873905139/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-912212 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2873905139/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

TestFunctional/parallel/ServiceCmd/URL (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.25:30772
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

TestFunctional/parallel/DockerEnv/bash (1.34s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-912212 docker-env) && out/minikube-linux-amd64 status -p functional-912212"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-912212 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.34s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image save gcr.io/google-containers/addon-resizer:functional-912212 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-912212 image save gcr.io/google-containers/addon-resizer:functional-912212 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (2.216090594s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.22s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image rm gcr.io/google-containers/addon-resizer:functional-912212 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.93s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-912212 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.639050374s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.89s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-912212
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-912212 image save --daemon gcr.io/google-containers/addon-resizer:functional-912212 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-912212 image save --daemon gcr.io/google-containers/addon-resizer:functional-912212 --alsologtostderr: (2.005451728s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-912212
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.04s)

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-912212
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-912212
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-912212
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (342.62s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-480871 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-480871 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m3.708089034s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-480871 cache add gcr.io/k8s-minikube/gvisor-addon:2
E1114 14:12:50.835827   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-480871 cache add gcr.io/k8s-minikube/gvisor-addon:2: (28.583883877s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-480871 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-480871 addons enable gvisor: (5.357860629s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [9fb39cc7-49b1-4a3c-996d-82e1f747aa98] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.02296043s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-480871 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [bd409d1e-c852-45cb-995f-c649802f8e1b] Pending
helpers_test.go:344: "nginx-gvisor" [bd409d1e-c852-45cb-995f-c649802f8e1b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [bd409d1e-c852-45cb-995f-c649802f8e1b] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 14.022406636s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-480871
E1114 14:13:58.751852   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-480871: (1m33.287930285s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-480871 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-480871 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m1.129301241s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [9fb39cc7-49b1-4a3c-996d-82e1f747aa98] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.033634378s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [bd409d1e-c852-45cb-995f-c649802f8e1b] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.0124682s
helpers_test.go:175: Cleaning up "gvisor-480871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-480871
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-480871: (1.20371795s)
--- PASS: TestGvisorAddon (342.62s)
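The two readiness waits above (the gvisor pod in kube-system before the stop/start cycle, then the nginx-gvisor pod in default afterwards) can be reproduced by hand; roughly, with the label selectors taken from the log and standard kubectl flags:

kubectl --context gvisor-480871 -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=gvisor --timeout=4m0s
kubectl --context gvisor-480871 -n default wait --for=condition=Ready pod -l run=nginx,runtime=gvisor --timeout=4m0s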

TestImageBuild/serial/Setup (52.06s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-955805 --driver=kvm2 
E1114 13:45:34.681529   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-955805 --driver=kvm2 : (52.058164721s)
--- PASS: TestImageBuild/serial/Setup (52.06s)

TestImageBuild/serial/NormalBuild (1.61s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-955805
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-955805: (1.614162728s)
--- PASS: TestImageBuild/serial/NormalBuild (1.61s)

TestImageBuild/serial/BuildWithBuildArg (1.3s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-955805
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-955805: (1.301200636s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.30s)

TestImageBuild/serial/BuildWithDockerIgnore (0.4s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-955805
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.40s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.28s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-955805
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.28s)

TestIngressAddonLegacy/StartLegacyK8sCluster (72.81s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-736725 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-736725 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m12.808346459s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (72.81s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.48s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-736725 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-736725 addons enable ingress --alsologtostderr -v=5: (18.477368277s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.48s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.62s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-736725 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.62s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (41.64s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-736725 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-736725 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.625841237s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-736725 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-736725 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [279ec24b-27e1-450e-b3ec-eea72f5665df] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [279ec24b-27e1-450e-b3ec-eea72f5665df] Running
E1114 13:47:50.835955   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.017702554s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-736725 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-736725 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-736725 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.220
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-736725 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-736725 addons disable ingress-dns --alsologtostderr -v=1: (9.171691311s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-736725 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-736725 addons disable ingress --alsologtostderr -v=1: (7.550067388s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (41.64s)
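
Note: the sequence above is the full legacy-ingress round trip: wait for the controller, apply the v1beta1 Ingress plus pod/service manifests, curl through the controller with a Host header, then resolve a test record against the cluster IP. Replayed by hand it looks like this (commands taken directly from the run above):

	out/minikube-linux-amd64 -p ingress-addon-legacy-736725 ssh \
	  "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# ingress-dns: resolve the example record against the cluster's IP.
	IP=$(out/minikube-linux-amd64 -p ingress-addon-legacy-736725 ip)
	nslookup hello-john.test "$IP"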

TestJSONOutput/start/Command (67.98s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-168821 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E1114 13:48:18.522298   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 13:48:58.751425   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 13:48:58.756693   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 13:48:58.766981   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 13:48:58.787279   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 13:48:58.827544   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 13:48:58.907902   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 13:48:59.068429   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 13:48:59.389092   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 13:49:00.030244   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 13:49:01.311139   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 13:49:03.872905   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 13:49:08.993144   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 13:49:19.233997   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-168821 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m7.983106671s)
--- PASS: TestJSONOutput/start/Command (67.98s)
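
Note: with --output=json each progress line is a CloudEvent. A sketch for watching the step sequence from a shell; jq is assumed to be installed, and the field names match the events visible under TestErrorJSONOutput further down:

	out/minikube-linux-amd64 start -p json-output-168821 --output=json --user=testUser \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step")
	           | "\(.data.currentstep)/\(.data.totalsteps) \(.data.message)"'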

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-168821 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-168821 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.1s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-168821 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-168821 --output=json --user=testUser: (8.103685696s)
--- PASS: TestJSONOutput/stop/Command (8.10s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-499533 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-499533 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.140252ms)

-- stdout --
	{"specversion":"1.0","id":"09f44b09-fb74-4d12-8dbd-ee03f883e813","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-499533] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f047c24e-e912-438b-8243-9bb0f76fdb7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17581"}}
	{"specversion":"1.0","id":"3be39bb1-fcb4-4c0c-a223-1773938a5bda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b6b0d752-173b-403a-84f8-454b37a7ab34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17581-6041/kubeconfig"}}
	{"specversion":"1.0","id":"0662a2b4-5fcd-4dbe-9262-d5dac53c0cc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-6041/.minikube"}}
	{"specversion":"1.0","id":"2ec118d2-c6e5-4a15-97df-0878fbe80d58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"47366080-f5ae-4ada-8857-a6c759ad1799","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3704930c-0151-4e24-89de-6d74ef767d84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-499533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-499533
--- PASS: TestErrorJSONOutput (0.21s)
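
Note: the failure path emits a single io.k8s.sigs.minikube.error event carrying the exit code, as the stdout above shows. A sketch for extracting it from the stream (jq assumed available):

	out/minikube-linux-amd64 start -p json-output-error-499533 --memory=2200 \
	  --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error")
	           | "\(.data.name): exit \(.data.exitcode)"'
	# expected, per the event above: DRV_UNSUPPORTED_OS: exit 56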

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (104.1s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-455088 --driver=kvm2 
E1114 13:49:39.714721   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 13:50:20.675744   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-455088 --driver=kvm2 : (51.449519779s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-458048 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-458048 --driver=kvm2 : (49.993715558s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-455088
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-458048
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-458048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-458048
helpers_test.go:175: Cleaning up "first-455088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-455088
--- PASS: TestMinikubeProfile (104.10s)
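
Note: the test drives two clusters side by side and flips the active profile between them. A condensed sketch of the same flow; the jq path at the end assumes profile list's JSON keeps its usual valid/invalid shape, which this log does not print:

	out/minikube-linux-amd64 start -p first-455088 --driver=kvm2
	out/minikube-linux-amd64 start -p second-458048 --driver=kvm2
	out/minikube-linux-amd64 profile first-455088          # select the active profile
	out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'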

TestMountStart/serial/StartWithMountFirst (29.22s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-635786 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-635786 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.224397162s)
E1114 13:51:42.595976   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountFirst (29.22s)
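
Note: the start flags above export the host directory into the guest over 9p, pinning uid/gid, msize, and the transport port. Whether the share actually landed can be checked the same way the VerifyMount* steps below do:

	out/minikube-linux-amd64 -p mount-start-1-635786 ssh -- ls /minikube-host
	out/minikube-linux-amd64 -p mount-start-1-635786 ssh -- "mount | grep 9p"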

TestMountStart/serial/VerifyMountFirst (0.41s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-635786 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-635786 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

TestMountStart/serial/StartWithMountSecond (28.9s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-653279 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-653279 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.903291812s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.90s)

TestMountStart/serial/VerifyMountSecond (0.42s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-653279 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-653279 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

TestMountStart/serial/DeleteFirst (0.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-635786 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

TestMountStart/serial/VerifyMountPostDelete (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-653279 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-653279 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

TestMountStart/serial/Stop (2.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-653279
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-653279: (2.254747989s)
--- PASS: TestMountStart/serial/Stop (2.25s)

TestMountStart/serial/RestartStopped (24.83s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-653279
E1114 13:52:28.731084   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 13:52:28.736442   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 13:52:28.746724   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 13:52:28.767173   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 13:52:28.807476   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 13:52:28.887862   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 13:52:29.048309   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 13:52:29.368889   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 13:52:30.009859   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 13:52:31.290396   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 13:52:33.852206   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 13:52:38.973219   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-653279: (23.833107285s)
--- PASS: TestMountStart/serial/RestartStopped (24.83s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-653279 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-653279 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (138.92s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-661456 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E1114 13:52:49.214299   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 13:52:50.835813   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 13:53:09.695498   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 13:53:50.655827   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 13:53:58.751846   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 13:54:26.437483   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-661456 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m18.475582729s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.92s)
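
Note: --nodes=2 provisions a control plane plus one worker in a single start. A short sketch for confirming that both machines and both Kubernetes nodes came up:

	out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr
	out/minikube-linux-amd64 kubectl -p multinode-661456 -- get nodes -o wide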

TestMultiNode/serial/DeployApp2Nodes (5.27s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-661456 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-661456 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-661456 -- rollout status deployment/busybox: (3.376550526s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-661456 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-661456 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-661456 -- exec busybox-5bc68d56bd-tx7cv -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-661456 -- exec busybox-5bc68d56bd-wrrkq -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-661456 -- exec busybox-5bc68d56bd-tx7cv -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-661456 -- exec busybox-5bc68d56bd-wrrkq -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-661456 -- exec busybox-5bc68d56bd-tx7cv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-661456 -- exec busybox-5bc68d56bd-wrrkq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.27s)
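
Note: the busybox deployment is rolled out across both nodes and DNS is probed from each replica. A loop form of the same per-pod checks, mirroring the commands above (pod names vary per run, hence the jsonpath listing):

	for pod in $(out/minikube-linux-amd64 kubectl -p multinode-661456 -- \
	    get pods -o jsonpath='{.items[*].metadata.name}'); do
	  out/minikube-linux-amd64 kubectl -p multinode-661456 -- \
	    exec "$pod" -- nslookup kubernetes.default
	done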

TestMultiNode/serial/PingHostFrom2Pods (0.95s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-661456 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-661456 -- exec busybox-5bc68d56bd-tx7cv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-661456 -- exec busybox-5bc68d56bd-tx7cv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-661456 -- exec busybox-5bc68d56bd-wrrkq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-661456 -- exec busybox-5bc68d56bd-wrrkq -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)
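
Note: each pod resolves host.minikube.internal and pings the address it gets back, proving the pods can reach the host network. The two steps chained together (the pod name is from this run; the awk/cut extraction is the test's own):

	HOST_IP=$(out/minikube-linux-amd64 kubectl -p multinode-661456 -- \
	  exec busybox-5bc68d56bd-tx7cv -- sh -c \
	  "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out/minikube-linux-amd64 kubectl -p multinode-661456 -- \
	  exec busybox-5bc68d56bd-tx7cv -- sh -c "ping -c 1 $HOST_IP"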

TestMultiNode/serial/AddNode (51.2s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-661456 -v 3 --alsologtostderr
E1114 13:55:12.576031   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-661456 -v 3 --alsologtostderr: (50.573014415s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.20s)

TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

TestMultiNode/serial/CopyFile (8.13s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 cp testdata/cp-test.txt multinode-661456:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 cp multinode-661456:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile422828335/001/cp-test_multinode-661456.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 cp multinode-661456:/home/docker/cp-test.txt multinode-661456-m02:/home/docker/cp-test_multinode-661456_multinode-661456-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456-m02 "sudo cat /home/docker/cp-test_multinode-661456_multinode-661456-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 cp multinode-661456:/home/docker/cp-test.txt multinode-661456-m03:/home/docker/cp-test_multinode-661456_multinode-661456-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456-m03 "sudo cat /home/docker/cp-test_multinode-661456_multinode-661456-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 cp testdata/cp-test.txt multinode-661456-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 cp multinode-661456-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile422828335/001/cp-test_multinode-661456-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 cp multinode-661456-m02:/home/docker/cp-test.txt multinode-661456:/home/docker/cp-test_multinode-661456-m02_multinode-661456.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456 "sudo cat /home/docker/cp-test_multinode-661456-m02_multinode-661456.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 cp multinode-661456-m02:/home/docker/cp-test.txt multinode-661456-m03:/home/docker/cp-test_multinode-661456-m02_multinode-661456-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456-m03 "sudo cat /home/docker/cp-test_multinode-661456-m02_multinode-661456-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 cp testdata/cp-test.txt multinode-661456-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 cp multinode-661456-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile422828335/001/cp-test_multinode-661456-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 cp multinode-661456-m03:/home/docker/cp-test.txt multinode-661456:/home/docker/cp-test_multinode-661456-m03_multinode-661456.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456 "sudo cat /home/docker/cp-test_multinode-661456-m03_multinode-661456.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 cp multinode-661456-m03:/home/docker/cp-test.txt multinode-661456-m02:/home/docker/cp-test_multinode-661456-m03_multinode-661456-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456-m02 "sudo cat /home/docker/cp-test_multinode-661456-m03_multinode-661456-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.13s)
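
Note: the cp matrix above exercises every source/target form, each verified with ssh + cat afterwards. One of each, condensed (the /tmp target below is an illustrative local path, not one from this run):

	# local file -> node
	out/minikube-linux-amd64 -p multinode-661456 cp testdata/cp-test.txt multinode-661456:/home/docker/cp-test.txt
	# node -> local path
	out/minikube-linux-amd64 -p multinode-661456 cp multinode-661456:/home/docker/cp-test.txt /tmp/cp-test-local.txt
	# node -> node
	out/minikube-linux-amd64 -p multinode-661456 cp multinode-661456:/home/docker/cp-test.txt multinode-661456-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-661456 ssh -n multinode-661456-m02 "sudo cat /home/docker/cp-test.txt"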

TestMultiNode/serial/StopNode (4.05s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-661456 node stop m03: (3.100337818s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-661456 status: exit status 7 (470.483523ms)

-- stdout --
	multinode-661456
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-661456-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-661456-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-661456 status --alsologtostderr: exit status 7 (481.038675ms)

-- stdout --
	multinode-661456
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-661456-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-661456-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1114 13:56:10.416599   28103 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:56:10.416740   28103 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:56:10.416749   28103 out.go:309] Setting ErrFile to fd 2...
	I1114 13:56:10.416753   28103 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:56:10.416955   28103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-6041/.minikube/bin
	I1114 13:56:10.417125   28103 out.go:303] Setting JSON to false
	I1114 13:56:10.417153   28103 mustload.go:65] Loading cluster: multinode-661456
	I1114 13:56:10.417197   28103 notify.go:220] Checking for updates...
	I1114 13:56:10.417679   28103 config.go:182] Loaded profile config "multinode-661456": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1114 13:56:10.417698   28103 status.go:255] checking status of multinode-661456 ...
	I1114 13:56:10.418161   28103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:56:10.418216   28103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:56:10.432649   28103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43269
	I1114 13:56:10.433040   28103 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:56:10.433584   28103 main.go:141] libmachine: Using API Version  1
	I1114 13:56:10.433610   28103 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:56:10.434000   28103 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:56:10.434210   28103 main.go:141] libmachine: (multinode-661456) Calling .GetState
	I1114 13:56:10.435842   28103 status.go:330] multinode-661456 host status = "Running" (err=<nil>)
	I1114 13:56:10.435858   28103 host.go:66] Checking if "multinode-661456" exists ...
	I1114 13:56:10.436137   28103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:56:10.436173   28103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:56:10.450466   28103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36119
	I1114 13:56:10.450841   28103 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:56:10.451261   28103 main.go:141] libmachine: Using API Version  1
	I1114 13:56:10.451280   28103 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:56:10.451657   28103 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:56:10.451819   28103 main.go:141] libmachine: (multinode-661456) Calling .GetIP
	I1114 13:56:10.454668   28103 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:56:10.455022   28103 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:52:58 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:56:10.455066   28103 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:56:10.455221   28103 host.go:66] Checking if "multinode-661456" exists ...
	I1114 13:56:10.455494   28103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:56:10.455537   28103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:56:10.469974   28103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35167
	I1114 13:56:10.470311   28103 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:56:10.470690   28103 main.go:141] libmachine: Using API Version  1
	I1114 13:56:10.470708   28103 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:56:10.471031   28103 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:56:10.471197   28103 main.go:141] libmachine: (multinode-661456) Calling .DriverName
	I1114 13:56:10.471380   28103 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 13:56:10.471409   28103 main.go:141] libmachine: (multinode-661456) Calling .GetSSHHostname
	I1114 13:56:10.474098   28103 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:56:10.474557   28103 main.go:141] libmachine: (multinode-661456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f9:71:4b", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:52:58 +0000 UTC Type:0 Mac:52:54:00:f9:71:4b Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:multinode-661456 Clientid:01:52:54:00:f9:71:4b}
	I1114 13:56:10.474598   28103 main.go:141] libmachine: (multinode-661456) DBG | domain multinode-661456 has defined IP address 192.168.39.222 and MAC address 52:54:00:f9:71:4b in network mk-multinode-661456
	I1114 13:56:10.474717   28103 main.go:141] libmachine: (multinode-661456) Calling .GetSSHPort
	I1114 13:56:10.474948   28103 main.go:141] libmachine: (multinode-661456) Calling .GetSSHKeyPath
	I1114 13:56:10.475083   28103 main.go:141] libmachine: (multinode-661456) Calling .GetSSHUsername
	I1114 13:56:10.475204   28103 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456/id_rsa Username:docker}
	I1114 13:56:10.570713   28103 ssh_runner.go:195] Run: systemctl --version
	I1114 13:56:10.577173   28103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 13:56:10.595504   28103 kubeconfig.go:92] found "multinode-661456" server: "https://192.168.39.222:8443"
	I1114 13:56:10.595536   28103 api_server.go:166] Checking apiserver status ...
	I1114 13:56:10.595581   28103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 13:56:10.612225   28103 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1880/cgroup
	I1114 13:56:10.624572   28103 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod53c7ea94508e5c77038361438391a9cf/4037c5756e5b2593fb75b50681e41f786423036aa1b4d08b46422c089dbfae62"
	I1114 13:56:10.624642   28103 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod53c7ea94508e5c77038361438391a9cf/4037c5756e5b2593fb75b50681e41f786423036aa1b4d08b46422c089dbfae62/freezer.state
	I1114 13:56:10.636821   28103 api_server.go:204] freezer state: "THAWED"
	I1114 13:56:10.636852   28103 api_server.go:253] Checking apiserver healthz at https://192.168.39.222:8443/healthz ...
	I1114 13:56:10.644311   28103 api_server.go:279] https://192.168.39.222:8443/healthz returned 200:
	ok
	I1114 13:56:10.644341   28103 status.go:421] multinode-661456 apiserver status = Running (err=<nil>)
	I1114 13:56:10.644350   28103 status.go:257] multinode-661456 status: &{Name:multinode-661456 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1114 13:56:10.644366   28103 status.go:255] checking status of multinode-661456-m02 ...
	I1114 13:56:10.644724   28103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:56:10.644767   28103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:56:10.660727   28103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38315
	I1114 13:56:10.661132   28103 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:56:10.661682   28103 main.go:141] libmachine: Using API Version  1
	I1114 13:56:10.661704   28103 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:56:10.662062   28103 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:56:10.662284   28103 main.go:141] libmachine: (multinode-661456-m02) Calling .GetState
	I1114 13:56:10.663932   28103 status.go:330] multinode-661456-m02 host status = "Running" (err=<nil>)
	I1114 13:56:10.663953   28103 host.go:66] Checking if "multinode-661456-m02" exists ...
	I1114 13:56:10.664313   28103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:56:10.664348   28103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:56:10.678920   28103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43765
	I1114 13:56:10.679288   28103 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:56:10.679688   28103 main.go:141] libmachine: Using API Version  1
	I1114 13:56:10.679712   28103 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:56:10.680005   28103 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:56:10.680183   28103 main.go:141] libmachine: (multinode-661456-m02) Calling .GetIP
	I1114 13:56:10.683120   28103 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 13:56:10.683573   28103 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:54:20 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 13:56:10.683602   28103 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 13:56:10.683742   28103 host.go:66] Checking if "multinode-661456-m02" exists ...
	I1114 13:56:10.684038   28103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:56:10.684071   28103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:56:10.699033   28103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35293
	I1114 13:56:10.699414   28103 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:56:10.699907   28103 main.go:141] libmachine: Using API Version  1
	I1114 13:56:10.699938   28103 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:56:10.700251   28103 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:56:10.700480   28103 main.go:141] libmachine: (multinode-661456-m02) Calling .DriverName
	I1114 13:56:10.700654   28103 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 13:56:10.700676   28103 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHHostname
	I1114 13:56:10.703347   28103 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 13:56:10.703811   28103 main.go:141] libmachine: (multinode-661456-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:e2:91", ip: ""} in network mk-multinode-661456: {Iface:virbr1 ExpiryTime:2023-11-14 14:54:20 +0000 UTC Type:0 Mac:52:54:00:17:e2:91 Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:multinode-661456-m02 Clientid:01:52:54:00:17:e2:91}
	I1114 13:56:10.703843   28103 main.go:141] libmachine: (multinode-661456-m02) DBG | domain multinode-661456-m02 has defined IP address 192.168.39.228 and MAC address 52:54:00:17:e2:91 in network mk-multinode-661456
	I1114 13:56:10.704027   28103 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHPort
	I1114 13:56:10.704208   28103 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHKeyPath
	I1114 13:56:10.704374   28103 main.go:141] libmachine: (multinode-661456-m02) Calling .GetSSHUsername
	I1114 13:56:10.704511   28103 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17581-6041/.minikube/machines/multinode-661456-m02/id_rsa Username:docker}
	I1114 13:56:10.799584   28103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 13:56:10.814214   28103 status.go:257] multinode-661456-m02 status: &{Name:multinode-661456-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1114 13:56:10.814263   28103 status.go:255] checking status of multinode-661456-m03 ...
	I1114 13:56:10.814623   28103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1114 13:56:10.814676   28103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1114 13:56:10.829533   28103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44429
	I1114 13:56:10.830036   28103 main.go:141] libmachine: () Calling .GetVersion
	I1114 13:56:10.830501   28103 main.go:141] libmachine: Using API Version  1
	I1114 13:56:10.830523   28103 main.go:141] libmachine: () Calling .SetConfigRaw
	I1114 13:56:10.830854   28103 main.go:141] libmachine: () Calling .GetMachineName
	I1114 13:56:10.831046   28103 main.go:141] libmachine: (multinode-661456-m03) Calling .GetState
	I1114 13:56:10.832651   28103 status.go:330] multinode-661456-m03 host status = "Stopped" (err=<nil>)
	I1114 13:56:10.832672   28103 status.go:343] host is not running, skipping remaining checks
	I1114 13:56:10.832680   28103 status.go:257] multinode-661456-m03 status: &{Name:multinode-661456-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (4.05s)

TestMultiNode/serial/StartAfterStop (32.38s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-661456 node start m03 --alsologtostderr: (31.728432166s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-661456 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.38s)
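
For reference, the node stop/start round trip that StopNode and StartAfterStop exercise, distilled from the commands in the log (assuming a plain `minikube` on PATH in place of the test's out/minikube-linux-amd64 binary):

    minikube -p multinode-661456 node stop m03     # StopNode: worker reports Host:Stopped
    minikube -p multinode-661456 node start m03    # StartAfterStop: bring it back
    minikube -p multinode-661456 status            # all nodes report Running again
    kubectl get nodes                              # m03 has rejoined the cluster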

TestMultiNode/serial/ValidateNameConflict (53.85s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-661456
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-661456-m03 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-661456-m03 --driver=kvm2 : exit status 14 (79.414694ms)

-- stdout --
	* [multinode-661456-m03] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-6041/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-6041/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-661456-m03' is duplicated with machine name 'multinode-661456-m03' in profile 'multinode-661456'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-661456-m04 --driver=kvm2 
E1114 14:02:28.731232   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 14:02:50.836100   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-661456-m04 --driver=kvm2 : (52.460608965s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-661456
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-661456: exit status 80 (248.404277ms)

-- stdout --
	* Adding node m04 to cluster multinode-661456
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-661456-m04 already exists in multinode-661456-m04 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-661456-m04
--- PASS: TestMultiNode/serial/ValidateNameConflict (53.85s)
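
A minimal sketch of the name-conflict rule this test validates: a new profile may not reuse the machine name of a node in an existing profile, and the check fails fast with MK_USAGE (exit status 14) before any VM is created (profile names from the log; `minikube` stands in for the test binary):

    minikube start -p multinode-661456-m03 --driver=kvm2   # rejected: name is taken by a node of multinode-661456
    echo $?                                                # 14
    minikube start -p multinode-661456-m04 --driver=kvm2   # unique name: a fresh cluster is created
    minikube delete -p multinode-661456-m04                # cleanup, as the test does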

TestPreload (181.94s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-363639 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E1114 14:03:58.751468   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-363639 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m36.472729838s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-363639 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-363639 image pull gcr.io/k8s-minikube/busybox: (1.321952464s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-363639
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-363639: (13.111654395s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-363639 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E1114 14:05:21.798578   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-363639 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m9.956327411s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-363639 image list
helpers_test.go:175: Cleaning up "test-preload-363639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-363639
--- PASS: TestPreload (181.94s)
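
The preload round trip above, condensed into shell form (same flags as the log; `minikube` assumed on PATH):

    minikube start -p test-preload-363639 --memory=2200 --preload=false --driver=kvm2 --kubernetes-version=v1.24.4
    minikube -p test-preload-363639 image pull gcr.io/k8s-minikube/busybox   # side-load an image
    minikube stop -p test-preload-363639
    minikube start -p test-preload-363639 --memory=2200 --driver=kvm2        # restart applies the preload
    minikube -p test-preload-363639 image list                               # busybox must still be present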

TestScheduledStopUnix (123.68s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-670522 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-670522 --memory=2048 --driver=kvm2 : (51.815057766s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-670522 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-670522 -n scheduled-stop-670522
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-670522 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-670522 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-670522 -n scheduled-stop-670522
E1114 14:07:28.728345   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-670522
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-670522 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1114 14:07:50.835918   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-670522
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-670522: exit status 7 (72.739912ms)

-- stdout --
	scheduled-stop-670522
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-670522 -n scheduled-stop-670522
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-670522 -n scheduled-stop-670522: exit status 7 (75.557816ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-670522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-670522
--- PASS: TestScheduledStopUnix (123.68s)
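
The scheduled-stop lifecycle exercised above, as plain commands (the --schedule and --cancel-scheduled flags are exactly those from the log):

    minikube stop -p scheduled-stop-670522 --schedule 5m              # arm a stop 5 minutes out
    minikube status -p scheduled-stop-670522 --format={{.TimeToStop}} # inspect the pending schedule
    minikube stop -p scheduled-stop-670522 --cancel-scheduled         # disarm it
    minikube stop -p scheduled-stop-670522 --schedule 15s             # re-arm; the host stops ~15s later
    minikube status -p scheduled-stop-670522                          # exit status 7 once stopped (may be ok)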

TestSkaffold (143.04s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3223576223 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-754913 --memory=2600 --driver=kvm2 
E1114 14:08:51.777822   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
E1114 14:08:58.751806   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-754913 --memory=2600 --driver=kvm2 : (51.022270304s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3223576223 run --minikube-profile skaffold-754913 --kube-context skaffold-754913 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3223576223 run --minikube-profile skaffold-754913 --kube-context skaffold-754913 --status-check=true --port-forward=false --interactive=false: (1m20.346436192s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-59976bb75d-7b672" [170bdf0e-40bd-42dd-bfb4-194ca5700436] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.022578863s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-699599b8dd-qkl2g" [5a8c1e6a-dd67-4bc2-9a9d-b508e1cca989] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.011851818s
helpers_test.go:175: Cleaning up "skaffold-754913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-754913
--- PASS: TestSkaffold (143.04s)
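
The skaffold flow from this test in reusable form, assuming a locally installed `skaffold` instead of the test's /tmp copy:

    minikube start -p skaffold-754913 --memory=2600 --driver=kvm2
    skaffold run --minikube-profile skaffold-754913 --kube-context skaffold-754913 \
      --status-check=true --port-forward=false --interactive=false
    kubectl get pods -l app=leeroy-app   # the test then waits for these pods
    kubectl get pods -l app=leeroy-web   # to become Running/healthy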

TestRunningBinaryUpgrade (203.13s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.3402170592.exe start -p running-upgrade-192389 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.3402170592.exe start -p running-upgrade-192389 --memory=2200 --vm-driver=kvm2 : (1m51.016322467s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-192389 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-192389 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m30.095051287s)
helpers_test.go:175: Cleaning up "running-upgrade-192389" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-192389
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-192389: (1.662573632s)
--- PASS: TestRunningBinaryUpgrade (203.13s)
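
The binary-upgrade pattern here is simply: create the cluster with an old minikube release, then run start on the same profile with the current binary while the VM is still running. Using the exact paths from this run (any v1.6.2 binary works in place of the test's temp copy):

    /tmp/minikube-v1.6.2.3402170592.exe start -p running-upgrade-192389 --memory=2200 --vm-driver=kvm2
    out/minikube-linux-amd64 start -p running-upgrade-192389 --memory=2200 --driver=kvm2   # in-place upgrade
    out/minikube-linux-amd64 delete -p running-upgrade-192389                              # cleanup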

TestKubernetesUpgrade (227.95s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-745402 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-745402 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m48.011817788s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-745402
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-745402: (13.144507573s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-745402 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-745402 status --format={{.Host}}: exit status 7 (99.382115ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-745402 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-745402 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 : (47.034818878s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-745402 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-745402 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-745402 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (120.867796ms)

-- stdout --
	* [kubernetes-upgrade-745402] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-6041/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-6041/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-745402
	    minikube start -p kubernetes-upgrade-745402 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7454022 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-745402 --kubernetes-version=v1.28.3
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-745402 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 
E1114 14:15:26.709907   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:15:26.715212   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:15:26.725545   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:15:26.745925   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:15:26.786267   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:15:26.866649   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:15:27.027101   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:15:27.347694   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:15:27.988520   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:15:29.269522   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:15:31.830641   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:15:36.951161   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:15:47.191415   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:15:53.885554   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-745402 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 : (58.259023762s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-745402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-745402
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-745402: (1.207837986s)
--- PASS: TestKubernetesUpgrade (227.95s)
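
The upgrade/downgrade contract this test pins down, in shell form (versions and profile name from the log): in-place upgrades are allowed, while downgrades are refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106) and leave the cluster untouched:

    minikube start -p kubernetes-upgrade-745402 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2
    minikube stop -p kubernetes-upgrade-745402
    minikube start -p kubernetes-upgrade-745402 --memory=2200 --kubernetes-version=v1.28.3 --driver=kvm2   # upgrade: ok
    minikube start -p kubernetes-upgrade-745402 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2   # downgrade: exit 106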

TestStoppedBinaryUpgrade/Setup (0.52s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.52s)

TestStoppedBinaryUpgrade/Upgrade (260.52s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.3748034633.exe start -p stopped-upgrade-954293 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.3748034633.exe start -p stopped-upgrade-954293 --memory=2200 --vm-driver=kvm2 : (2m20.328945895s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.3748034633.exe -p stopped-upgrade-954293 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.3748034633.exe -p stopped-upgrade-954293 stop: (13.097026946s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-954293 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E1114 14:16:48.632504   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:17:28.727895   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-954293 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m47.095421194s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (260.52s)

TestPause/serial/Start (75.24s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-588967 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-588967 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m15.237399009s)
--- PASS: TestPause/serial/Start (75.24s)

TestPause/serial/SecondStartNoReconfiguration (65.61s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-588967 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-588967 --alsologtostderr -v=1 --driver=kvm2 : (1m5.584949851s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (65.61s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-996869 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-996869 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (78.234733ms)

-- stdout --
	* [NoKubernetes-996869] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-6041/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-6041/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
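
As the MK_USAGE message above spells out, --no-kubernetes and --kubernetes-version are mutually exclusive; when a version is pinned in the global config, the fix suggested by the error text itself is:

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-996869 --no-kubernetes --driver=kvm2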

TestNoKubernetes/serial/StartWithK8s (69.32s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-996869 --driver=kvm2 
E1114 14:18:10.553045   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:18:19.680493   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
E1114 14:18:19.685806   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
E1114 14:18:19.696132   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
E1114 14:18:19.716422   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
E1114 14:18:19.756838   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
E1114 14:18:19.837702   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
E1114 14:18:19.998858   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
E1114 14:18:20.319490   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
E1114 14:18:20.960062   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-996869 --driver=kvm2 : (1m8.998250596s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-996869 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (69.32s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.64s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-954293
E1114 14:18:22.240299   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-954293: (1.642040027s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.64s)

TestNetworkPlugins/group/auto/Start (112.3s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E1114 14:18:29.921678   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m52.295401588s)
--- PASS: TestNetworkPlugins/group/auto/Start (112.30s)

TestPause/serial/Pause (0.9s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-588967 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-588967 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-588967 --output=json --layout=cluster: exit status 2 (309.180832ms)

-- stdout --
	{"Name":"pause-588967","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-588967","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
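
The cluster-layout JSON above is machine-readable; a quick way to pull out the paused state is sketched below (jq is an assumption here, any JSON tool works, and note the status command itself exits 2 while the cluster is paused):

    minikube status -p pause-588967 --output=json --layout=cluster | jq -r '.StatusName'
    # -> Paused   (StatusCode 418; the kubelet component reports 405/Stopped)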

TestPause/serial/Unpause (0.67s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-588967 --alsologtostderr -v=5
E1114 14:18:40.162319   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
--- PASS: TestPause/serial/Unpause (0.67s)

TestPause/serial/PauseAgain (0.88s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-588967 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

TestPause/serial/DeletePaused (1.13s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-588967 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-588967 --alsologtostderr -v=5: (1.12587469s)
--- PASS: TestPause/serial/DeletePaused (1.13s)

TestPause/serial/VerifyDeletedResources (13.95s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (13.949680944s)
--- PASS: TestPause/serial/VerifyDeletedResources (13.95s)
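
A lightweight stand-in for what this test asserts in Go (that the deleted profile no longer appears in the profile list); the grep is an illustration, not the test's actual check:

    minikube profile list --output json | grep -q pause-588967 || echo "pause-588967 fully removed"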

TestNetworkPlugins/group/kindnet/Start (89.7s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E1114 14:18:58.751694   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 14:19:00.643482   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m29.702414907s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.70s)

TestNoKubernetes/serial/StartWithStopK8s (39.05s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-996869 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-996869 --no-kubernetes --driver=kvm2 : (37.470067226s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-996869 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-996869 status -o json: exit status 2 (335.524594ms)

-- stdout --
	{"Name":"NoKubernetes-996869","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-996869
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-996869: (1.239486214s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.05s)
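
The status JSON shown above can be asserted on directly; a sketch using jq (an assumption; the test decodes the JSON in Go instead):

    minikube -p NoKubernetes-996869 status -o json | jq -r '.Host, .Kubelet'
    # Running
    # Stopped    (hence the overall exit status 2)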

TestNetworkPlugins/group/calico/Start (108.71s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m48.714485122s)
--- PASS: TestNetworkPlugins/group/calico/Start (108.71s)

TestNoKubernetes/serial/Start (51.66s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-996869 --no-kubernetes --driver=kvm2 
E1114 14:19:41.604026   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-996869 --no-kubernetes --driver=kvm2 : (51.660173976s)
--- PASS: TestNoKubernetes/serial/Start (51.66s)

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-228003 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (13.65s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-228003 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z6zmd" [7ca9dc81-c23a-4a57-aa12-5e1a270c1c38] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z6zmd" [7ca9dc81-c23a-4a57-aa12-5e1a270c1c38] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.017848338s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.65s)
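
Each NetCatPod subtest deploys the same testdata/netcat-deployment.yaml and the group then probes it three ways; the follow-up subtests for this group run exactly these commands:

    kubectl --context auto-228003 exec deployment/netcat -- nslookup kubernetes.default                  # DNS
    kubectl --context auto-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"  # Localhost
    kubectl --context auto-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"     # HairPin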

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bj7lv" [164b19f6-efa6-4c34-b6dd-9b3900e08e89] Running
E1114 14:20:26.710171   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.024636725s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-228003 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-228003 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-228003 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-d5fr4" [8fb8fa5b-b6ad-42af-8c09-02508b87c1df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-d5fr4" [8fb8fa5b-b6ad-42af-8c09-02508b87c1df] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.020953087s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-996869 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-996869 "sudo systemctl is-active --quiet service kubelet": exit status 1 (241.826247ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
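
The "Process exited with status 3" in stderr is systemd's convention: systemctl is-active --quiet exits 0 when the unit is active and 3 when it is inactive, so any non-zero exit is taken as proof the kubelet is not running:

    minikube ssh -p NoKubernetes-996869 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero while Kubernetes is disabled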

TestNoKubernetes/serial/ProfileList (1.61s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.61s)

TestNoKubernetes/serial/Stop (2.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-996869
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-996869: (2.273421174s)
--- PASS: TestNoKubernetes/serial/Stop (2.27s)

TestNoKubernetes/serial/StartNoArgs (28.41s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-996869 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-996869 --driver=kvm2 : (28.413214833s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (28.41s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-228003 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/Start (91.77s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E1114 14:20:54.394088   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m31.774431965s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (91.77s)

TestNetworkPlugins/group/false/Start (100.49s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E1114 14:21:03.524822   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m40.486211858s)
--- PASS: TestNetworkPlugins/group/false/Start (100.49s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-996869 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-996869 "sudo systemctl is-active --quiet service kubelet": exit status 1 (237.412973ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestNetworkPlugins/group/enable-default-cni/Start (124.77s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m4.768013322s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (124.77s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vwl2c" [2db73094-6622-4057-ad82-23320c391744] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.023516569s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-228003 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

TestNetworkPlugins/group/calico/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-228003 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6s6zl" [2f53d1d2-b430-4be2-bd0d-0d778178938d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6s6zl" [2f53d1d2-b430-4be2-bd0d-0d778178938d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.02091341s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.35s)

TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-228003 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

TestNetworkPlugins/group/calico/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

TestNetworkPlugins/group/calico/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.26s)

TestNetworkPlugins/group/flannel/Start (116.02s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
E1114 14:22:01.798756   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m56.022663057s)
--- PASS: TestNetworkPlugins/group/flannel/Start (116.02s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-228003 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-228003 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zrvc5" [8920ce7e-4be7-4432-acdd-19e7c34298e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1114 14:22:28.728364   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-zrvc5" [8920ce7e-4be7-4432-acdd-19e7c34298e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.012655297s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-228003 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-228003 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (13.43s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-228003 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-khdd8" [15cbf01e-7b8c-43fe-a060-32ce831a21f7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1114 14:22:50.836611   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-khdd8" [15cbf01e-7b8c-43fe-a060-32ce831a21f7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.017269806s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.43s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (91.16s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m31.157527726s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.16s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-228003 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-228003 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.51s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-228003 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bph75" [ab7d960d-c4fc-491a-9c9e-f6a8078ae334] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bph75" [ab7d960d-c4fc-491a-9c9e-f6a8078ae334] Running
E1114 14:23:19.680501   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.016443373s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.51s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (127.79s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-228003 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (2m7.792776602s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (127.79s)
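Note the flag difference: the CNI-based groups above start their profiles with --cni=<plugin>, while kubenet is selected with the kubelet-level --network-plugin=kubenet flag instead. An illustrative way to drive such a start from Go (binary path and profile name taken from the log):

// startkubenet.go: launches the kubenet profile the way the Start step does.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "kubenet-228003", "--memory=3072",
		"--wait=true", "--wait-timeout=15m",
		"--network-plugin=kubenet", "--driver=kvm2")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr // stream minikube output
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}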

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-228003 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (167.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-133714 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E1114 14:23:47.365737   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-133714 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m47.085242393s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (167.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nsb7m" [dda8adf5-836c-4f64-a18c-7cd2c9464dc7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.024605273s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-228003 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-228003 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ltmp2" [c70bcae6-0eab-4e9e-817d-d9c64e16b3ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ltmp2" [c70bcae6-0eab-4e9e-817d-d9c64e16b3ab] Running
E1114 14:23:58.751843   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.011940652s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-228003 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (98.16s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-678256 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-678256 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3: (1m38.159990957s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (98.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-228003 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.34s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-228003 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qp6rb" [35076c8c-090b-4dea-b59e-9a5f3a077b32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qp6rb" [35076c8c-090b-4dea-b59e-9a5f3a077b32] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.010386203s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-228003 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (83.04s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-277939 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3
E1114 14:25:18.149748   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
E1114 14:25:18.155036   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
E1114 14:25:18.165370   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
E1114 14:25:18.185695   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
E1114 14:25:18.226021   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
E1114 14:25:18.306258   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
E1114 14:25:18.467294   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
E1114 14:25:18.788109   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
E1114 14:25:19.429057   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
E1114 14:25:20.709514   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
E1114 14:25:23.269835   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-277939 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3: (1m23.044509333s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.04s)
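(The cert_rotation lines interleaved through this and later blocks appear to be harmless cross-test noise: client-go's certificate-rotation watcher still references client.crt files of profiles such as auto-228003 and kindnet-228003 that earlier subtests already deleted, so each rotation attempt logs a no-such-file error without affecting the test in progress.)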

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-228003 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-228003 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b2xrs" [50faa31b-7714-4e14-8550-fa3fa68ae26e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1114 14:25:26.298103   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
E1114 14:25:26.303437   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
E1114 14:25:26.313733   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
E1114 14:25:26.334709   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
E1114 14:25:26.374998   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
E1114 14:25:26.455407   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
E1114 14:25:26.616031   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
E1114 14:25:26.710326   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:25:26.937187   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
E1114 14:25:27.578314   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
E1114 14:25:28.390865   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
E1114 14:25:28.858560   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-b2xrs" [50faa31b-7714-4e14-8550-fa3fa68ae26e] Running
E1114 14:25:31.419431   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
E1114 14:25:31.778736   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.014097944s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.35s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-228003 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-228003 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1114 14:25:36.540577   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)
E1114 14:32:33.886471   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 14:32:42.916401   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:32:50.085325   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:32:50.836135   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 14:33:07.760591   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:33:10.601500   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:33:11.627068   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:33:19.680339   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-817895 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3
E1114 14:25:59.111906   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-817895 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3: (1m12.295712978s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-678256 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5ee2d5a7-9e7c-4d2d-a61b-8d4673239c43] Pending
helpers_test.go:344: "busybox" [5ee2d5a7-9e7c-4d2d-a61b-8d4673239c43] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5ee2d5a7-9e7c-4d2d-a61b-8d4673239c43] Running
E1114 14:26:07.260979   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.036658367s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-678256 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.43s)
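The DeployApp step doubles as an exec smoke test: once busybox is Running, the harness runs ulimit -n inside it, which exercises kubectl exec and reports the container's open-file limit. A sketch that captures and parses that value (names from the log; the bare-integer parsing is an illustrative assumption about the output shape):

// ulimitcheck.go: reads the open-file limit inside the busybox pod.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-678256",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		panic(err)
	}
	n, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		panic(err)
	}
	fmt.Printf("open-file limit in busybox: %d\n", n)
}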

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-678256 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-678256 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.106423333s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-678256 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.20s)
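The enable step deliberately points the MetricsServer addon at a bogus registry (fake.domain) via --registries, with the image overridden to echoserver:1.4; the follow-up describe then presumably only has to confirm the deployment exists and carries the override, not that metrics actually flow. An illustrative check along those lines:

// addoncheck.go: confirms metrics-server was rewired with the override.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-678256",
		"describe", "deploy/metrics-server", "-n", "kube-system").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), "fake.domain") {
		fmt.Println("metrics-server carries the fake.domain override")
	} else {
		fmt.Println("override not found; inspect the describe output")
	}
}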

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.14s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-678256 --alsologtostderr -v=3
E1114 14:26:17.461463   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
E1114 14:26:17.466741   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
E1114 14:26:17.476977   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
E1114 14:26:17.497326   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
E1114 14:26:17.538108   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
E1114 14:26:17.618447   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
E1114 14:26:17.779417   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
E1114 14:26:18.099977   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
E1114 14:26:18.740332   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
E1114 14:26:20.020740   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-678256 --alsologtostderr -v=3: (13.144006779s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.46s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-277939 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1b812357-bb9e-4e00-aea4-ecd7aaa38f77] Pending
E1114 14:26:22.581683   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
helpers_test.go:344: "busybox" [1b812357-bb9e-4e00-aea4-ecd7aaa38f77] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1b812357-bb9e-4e00-aea4-ecd7aaa38f77] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.026738776s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-277939 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-678256 -n no-preload-678256
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-678256 -n no-preload-678256: exit status 7 (88.990257ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-678256 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)
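Exit status 7 here is expected rather than fatal: the profile was just stopped, the Host field prints Stopped, and the harness records "may be ok" and continues. A sketch of tolerating that case from Go; reading 7 as "host stopped" is inferred from this log, not from minikube's documented exit codes:

// statuscheck.go: treats the stopped-host status code as non-fatal.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-678256").Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		fmt.Printf("host stopped (exit 7, may be ok): %s", out)
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("host: %s", out)
}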

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (336.48s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-678256 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3
E1114 14:26:27.702175   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-678256 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3: (5m36.132306651s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-678256 -n no-preload-678256
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (336.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-277939 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-277939 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.179307362s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-277939 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-133714 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ba0f56e0-0b97-4df2-98dd-8fa71b19f813] Pending
helpers_test.go:344: "busybox" [ba0f56e0-0b97-4df2-98dd-8fa71b19f813] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ba0f56e0-0b97-4df2-98dd-8fa71b19f813] Running
E1114 14:26:37.942943   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
E1114 14:26:40.072297   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.035179233s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-133714 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.50s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-277939 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-277939 --alsologtostderr -v=3: (13.147169459s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-133714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-133714 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-133714 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-133714 --alsologtostderr -v=3: (13.16554518s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-277939 -n embed-certs-277939
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-277939 -n embed-certs-277939: exit status 7 (73.840872ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-277939 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (320.01s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-277939 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3
E1114 14:26:48.221758   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-277939 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3: (5m19.678884668s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-277939 -n embed-certs-277939
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (320.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-133714 -n old-k8s-version-133714
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-133714 -n old-k8s-version-133714: exit status 7 (90.609979ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-133714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (465s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-133714 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E1114 14:26:58.423888   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-133714 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m44.724387526s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-133714 -n old-k8s-version-133714
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (465.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-817895 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7626f5f0-49fe-4155-a792-25e3f49fe408] Pending
helpers_test.go:344: "busybox" [7626f5f0-49fe-4155-a792-25e3f49fe408] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7626f5f0-49fe-4155-a792-25e3f49fe408] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.05301075s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-817895 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.54s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-817895 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-817895 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.268082812s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-817895 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-817895 --alsologtostderr -v=3
E1114 14:27:22.400619   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:27:22.405948   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:27:22.416199   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:27:22.436474   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:27:22.476754   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:27:22.557061   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:27:22.717726   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:27:23.038667   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:27:23.678873   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:27:24.959795   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:27:27.519995   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:27:28.727872   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/ingress-addon-legacy-736725/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-817895 --alsologtostderr -v=3: (13.145174902s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-817895 -n default-k8s-diff-port-817895
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-817895 -n default-k8s-diff-port-817895: exit status 7 (81.167829ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-817895 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (350.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-817895 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3
E1114 14:27:32.641084   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:27:39.384471   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
E1114 14:27:42.882121   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:27:42.917363   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:27:42.922706   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:27:42.932970   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:27:42.953250   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:27:42.993622   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:27:43.074091   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:27:43.234562   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:27:43.555599   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:27:44.196261   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:27:45.476722   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:27:48.037208   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:27:50.836327   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/addons-017503/client.crt: no such file or directory
E1114 14:27:53.157519   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:28:01.993195   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
E1114 14:28:03.362480   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:28:03.397745   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:28:10.142345   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
E1114 14:28:11.627295   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:28:11.632575   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:28:11.642825   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:28:11.663119   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:28:11.703449   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:28:11.783599   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:28:11.944020   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:28:12.264707   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:28:12.905141   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:28:14.186235   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:28:16.747114   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:28:19.680763   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
E1114 14:28:21.867535   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:28:23.878685   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:28:32.108441   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:28:44.323525   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:28:47.724698   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:28:47.729986   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:28:47.740341   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:28:47.760693   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:28:47.801001   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:28:47.881374   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:28:48.042428   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:28:48.363140   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:28:49.004330   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:28:50.285317   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:28:52.589307   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:28:52.845902   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:28:57.966230   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:28:58.751302   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/functional-912212/client.crt: no such file or directory
E1114 14:29:01.305336   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
E1114 14:29:04.839322   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:29:08.206781   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:29:27.493519   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
E1114 14:29:27.498815   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
E1114 14:29:27.509089   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
E1114 14:29:27.529359   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
E1114 14:29:27.569654   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
E1114 14:29:27.650016   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
E1114 14:29:27.810449   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
E1114 14:29:28.131476   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
E1114 14:29:28.687394   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:29:28.772037   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
E1114 14:29:30.053203   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
E1114 14:29:32.613761   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
E1114 14:29:33.549556   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:29:37.734947   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
E1114 14:29:47.975462   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
E1114 14:30:06.244495   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
E1114 14:30:08.456137   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
E1114 14:30:09.647897   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:30:18.149670   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
E1114 14:30:23.916613   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:30:23.921889   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:30:23.932174   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:30:23.952468   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:30:23.992833   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:30:24.073643   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:30:24.233947   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:30:24.554626   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:30:25.195563   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:30:26.298115   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
E1114 14:30:26.476609   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:30:26.710083   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
E1114 14:30:26.760292   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/false-228003/client.crt: no such file or directory
E1114 14:30:29.037274   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:30:34.157541   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:30:44.398653   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:30:45.833898   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/auto-228003/client.crt: no such file or directory
E1114 14:30:49.416948   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
E1114 14:30:53.983256   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kindnet-228003/client.crt: no such file or directory
E1114 14:30:55.470670   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
E1114 14:31:04.878927   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:31:17.460927   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
E1114 14:31:31.568379   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
E1114 14:31:45.146382   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/calico-228003/client.crt: no such file or directory
E1114 14:31:45.839634   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/kubenet-228003/client.crt: no such file or directory
E1114 14:31:49.754624   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/skaffold-754913/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-817895 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3: (5m49.872643998s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-817895 -n default-k8s-diff-port-817895
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (350.31s)
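
This second start re-used --apiserver-port=8444. A quick manual confirmation that the non-default port took effect (a sketch, assuming the default-k8s-diff-port-817895 context is present in the test's kubeconfig):

	$ kubectl cluster-info --context default-k8s-diff-port-817895
	# should report the control plane at https://<vm-ip>:8444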

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (16.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-k2kgr" [6bd3930e-5456-48ea-965a-8df8ffe3240b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-k2kgr" [6bd3930e-5456-48ea-965a-8df8ffe3240b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.022369619s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (16.02s)
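
The harness above polls for pods labelled k8s-app=kubernetes-dashboard until they report Running and healthy. Outside the test binary, roughly the same check can be expressed with kubectl wait (a sketch, assuming the no-preload-678256 context):

	$ kubectl --context no-preload-678256 -n kubernetes-dashboard \
	    wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m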

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-v6h7h" [fbcc38f0-7e9a-4eff-941b-93d1f99823c9] Running
E1114 14:32:11.337371   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/bridge-228003/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.022052992s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-v6h7h" [fbcc38f0-7e9a-4eff-941b-93d1f99823c9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014988994s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-277939 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-277939 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)
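
The image verification runs crictl inside the VM and scans the returned JSON for repo tags outside the expected minikube set. To list the same tags by hand (a sketch, assuming jq is installed on the host):

	$ out/minikube-linux-amd64 ssh -p embed-certs-277939 "sudo crictl images -o json" \
	    | jq -r '.images[].repoTags[]?'
	# the trailing ? skips entries whose repoTags field is null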

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-277939 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-277939 -n embed-certs-277939
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-277939 -n embed-certs-277939: exit status 2 (276.460144ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-277939 -n embed-certs-277939
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-277939 -n embed-certs-277939: exit status 2 (283.282625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-277939 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-277939 -n embed-certs-277939
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-277939 -n embed-certs-277939
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.90s)
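
The pause check above reads individual status fields through Go templates and tolerates exit status 2 ("may be ok"). The same fields can be combined into one call (a sketch):

	$ out/minikube-linux-amd64 status -p embed-certs-277939 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	# while paused this prints something like "Running Stopped Paused" and exits 2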

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-k2kgr" [6bd3930e-5456-48ea-965a-8df8ffe3240b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013811026s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-678256 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-981589 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3
E1114 14:32:22.401110   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/custom-flannel-228003/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-981589 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3: (1m16.998450204s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (77.00s)
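
This start passes --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16. To confirm the custom cluster CIDR actually reached the node (a sketch, assuming the newest-cni-981589 context; kubeadm typically carves a per-node /24 out of the /16):

	$ kubectl --context newest-cni-981589 get nodes -o jsonpath='{.items[0].spec.podCIDR}'
	# expected: a subnet of 10.42.0.0/16, e.g. 10.42.0.0/24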

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-678256 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-678256 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-678256 -n no-preload-678256
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-678256 -n no-preload-678256: exit status 2 (284.23822ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-678256 -n no-preload-678256
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-678256 -n no-preload-678256: exit status 2 (266.369295ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-678256 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-678256 -n no-preload-678256
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-678256 -n no-preload-678256
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.79s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (22.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-r4f9j" [7c4f39d2-2d98-4407-acb1-f76a932a9eb7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-r4f9j" [7c4f39d2-2d98-4407-acb1-f76a932a9eb7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 22.020883816s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (22.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-981589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-981589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.102686466s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.10s)
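
The enable call above overrides both the metrics-server image and its registry. Whether the override landed can be read back off the deployment (a sketch; the addon's deployment is named metrics-server in kube-system):

	$ kubectl --context newest-cni-981589 -n kube-system get deploy metrics-server \
	    -o jsonpath='{.spec.template.spec.containers[0].image}'
	# should show the fake.domain registry prefix supplied via --registries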

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-981589 --alsologtostderr -v=3
E1114 14:33:39.310995   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/enable-default-cni-228003/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-981589 --alsologtostderr -v=3: (8.138364593s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-r4f9j" [7c4f39d2-2d98-4407-acb1-f76a932a9eb7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012688309s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-817895 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1114 14:33:47.724339   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/flannel-228003/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-981589 -n newest-cni-981589
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-981589 -n newest-cni-981589: exit status 7 (84.88106ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-981589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (46.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-981589 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-981589 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3: (46.466120352s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-981589 -n newest-cni-981589
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (46.76s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-817895 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-817895 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-817895 -n default-k8s-diff-port-817895
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-817895 -n default-k8s-diff-port-817895: exit status 2 (274.661661ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-817895 -n default-k8s-diff-port-817895
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-817895 -n default-k8s-diff-port-817895: exit status 2 (265.013507ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-817895 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-817895 -n default-k8s-diff-port-817895
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-817895 -n default-k8s-diff-port-817895
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.68s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-981589 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-981589 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-981589 -n newest-cni-981589
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-981589 -n newest-cni-981589: exit status 2 (252.370831ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-981589 -n newest-cni-981589
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-981589 -n newest-cni-981589: exit status 2 (251.181253ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-981589 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-981589 -n newest-cni-981589
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-981589 -n newest-cni-981589
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-hdv95" [e8d1ce0a-c66b-4d84-864a-dd71c997a2aa] Running
E1114 14:34:42.726713   13238 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-6041/.minikube/profiles/gvisor-480871/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016721216s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-hdv95" [e8d1ce0a-c66b-4d84-864a-dd71c997a2aa] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010325394s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-133714 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-133714 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-133714 -n old-k8s-version-133714
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-133714 -n old-k8s-version-133714: exit status 2 (256.55075ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-133714 -n old-k8s-version-133714
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-133714 -n old-k8s-version-133714: exit status 2 (250.178247ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-133714 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-133714 -n old-k8s-version-133714
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-133714 -n old-k8s-version-133714
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.37s)

                                                
                                    

Test skip (31/321)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (4.32s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-228003 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-228003

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-228003

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-228003

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-228003

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-228003

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-228003

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-228003

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-228003

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-228003

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-228003

>>> host: /etc/nsswitch.conf:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: /etc/hosts:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: /etc/resolv.conf:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-228003

>>> host: crictl pods:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: crictl containers:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> k8s: describe netcat deployment:
error: context "cilium-228003" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-228003" does not exist

>>> k8s: netcat logs:
error: context "cilium-228003" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-228003" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-228003" does not exist

>>> k8s: coredns logs:
error: context "cilium-228003" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-228003" does not exist

>>> k8s: api server logs:
error: context "cilium-228003" does not exist

>>> host: /etc/cni:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: ip a s:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: ip r s:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: iptables-save:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: iptables table nat:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-228003

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-228003

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-228003" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-228003" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-228003

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-228003

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-228003" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-228003" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-228003" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-228003" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-228003" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: kubelet daemon config:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> k8s: kubelet logs:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-228003

>>> host: docker daemon status:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: docker daemon config:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: docker system info:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: cri-docker daemon status:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: cri-docker daemon config:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: cri-dockerd version:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: containerd daemon status:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: containerd daemon config:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: containerd config dump:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: crio daemon status:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: crio daemon config:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: /etc/crio:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

>>> host: crio config:
* Profile "cilium-228003" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228003"

----------------------- debugLogs end: cilium-228003 [took: 4.11688191s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-228003" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-228003
--- SKIP: TestNetworkPlugins/group/cilium (4.32s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-494762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-494762
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)
