Test Report: KVM_Linux 17375

48ead6827c858d28720e0f0a5b94c9bf64850269:2023-10-09:31379

Failed tests (3/321)

Order  Failed test                                                          Duration (s)
222    TestMultiNode/serial/RestartMultiNode                                90.09
234    TestRunningBinaryUpgrade                                             15.17
335    TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages    2.31
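
To triage the first failure locally, the failing case can be re-run in isolation against the same driver. A minimal sketch, assuming a checkout of the minikube repository at this commit and its standard make integration target (the exact TEST_ARGS accepted by the integration suite can vary by version):

	# Hypothetical local re-run: build minikube, then run only the failing subtest on the kvm2 driver.
	env TEST_ARGS="-minikube-start-args=--driver=kvm2 -test.run TestMultiNode/serial/RestartMultiNode" make integration
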
TestMultiNode/serial/RestartMultiNode (90.09s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-921619 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-921619 --wait=true -v=8 --alsologtostderr --driver=kvm2 : exit status 90 (1m27.726937423s)
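
The start aborts with exit status 90 roughly 1m28s in, while the worker node multinode-921619-m02 is still being provisioned (see the stdout below). When reproducing, the profile's machine and daemon logs can be captured for offline inspection; a minimal sketch using minikube's logs subcommand (the output filename is illustrative):

	# Dump logs for the profile used by this test to a local file.
	out/minikube-linux-amd64 logs -p multinode-921619 --file=restart-multinode-failure.log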

-- stdout --
	* [multinode-921619] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17375-78415/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node multinode-921619 in cluster multinode-921619
	* Restarting existing kvm2 VM for "multinode-921619" ...
	* Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Starting worker node multinode-921619-m02 in cluster multinode-921619
	* Restarting existing kvm2 VM for "multinode-921619-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.167
	
	

-- /stdout --
** stderr ** 
	I1009 23:19:42.554319  102501 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:19:42.554438  102501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:19:42.554447  102501 out.go:309] Setting ErrFile to fd 2...
	I1009 23:19:42.554452  102501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:19:42.554694  102501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-78415/.minikube/bin
	I1009 23:19:42.555224  102501 out.go:303] Setting JSON to false
	I1009 23:19:42.556124  102501 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10930,"bootTime":1696882653,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 23:19:42.556185  102501 start.go:138] virtualization: kvm guest
	I1009 23:19:42.558589  102501 out.go:177] * [multinode-921619] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1009 23:19:42.560021  102501 out.go:177]   - MINIKUBE_LOCATION=17375
	I1009 23:19:42.561515  102501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 23:19:42.560032  102501 notify.go:220] Checking for updates...
	I1009 23:19:42.564258  102501 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17375-78415/kubeconfig
	I1009 23:19:42.565674  102501 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	I1009 23:19:42.567066  102501 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 23:19:42.568463  102501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 23:19:42.570393  102501 config.go:182] Loaded profile config "multinode-921619": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:19:42.570824  102501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:19:42.570907  102501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:19:42.585661  102501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40847
	I1009 23:19:42.586059  102501 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:19:42.586668  102501 main.go:141] libmachine: Using API Version  1
	I1009 23:19:42.586693  102501 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:19:42.587078  102501 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:19:42.587290  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:19:42.587624  102501 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 23:19:42.588013  102501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:19:42.588056  102501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:19:42.601943  102501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
	I1009 23:19:42.602285  102501 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:19:42.602765  102501 main.go:141] libmachine: Using API Version  1
	I1009 23:19:42.602786  102501 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:19:42.603047  102501 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:19:42.603250  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:19:42.637234  102501 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 23:19:42.638613  102501 start.go:298] selected driver: kvm2
	I1009 23:19:42.638626  102501 start.go:902] validating driver "kvm2" against &{Name:multinode-921619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.2 ClusterName:multinode-921619 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false
kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:19:42.638763  102501 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 23:19:42.639070  102501 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:19:42.639133  102501 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17375-78415/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 23:19:42.653262  102501 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1009 23:19:42.654004  102501 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 23:19:42.654068  102501 cni.go:84] Creating CNI manager for ""
	I1009 23:19:42.654078  102501 cni.go:136] 2 nodes found, recommending kindnet
	I1009 23:19:42.654090  102501 start_flags.go:323] config:
	{Name:multinode-921619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-921619 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:fal
se nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:19:42.654293  102501 iso.go:125] acquiring lock: {Name:mk8f0545fb1f7801479f5eb65fbe7d8f303a99cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:19:42.656913  102501 out.go:177] * Starting control plane node multinode-921619 in cluster multinode-921619
	I1009 23:19:42.658142  102501 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1009 23:19:42.658176  102501 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17375-78415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1009 23:19:42.658184  102501 cache.go:57] Caching tarball of preloaded images
	I1009 23:19:42.658274  102501 preload.go:174] Found /home/jenkins/minikube-integration/17375-78415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1009 23:19:42.658285  102501 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1009 23:19:42.658393  102501 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/config.json ...
	I1009 23:19:42.658592  102501 start.go:365] acquiring machines lock for multinode-921619: {Name:mk4d06451f08f4d0dfbc191a7a07492b6e7c9c1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 23:19:42.658633  102501 start.go:369] acquired machines lock for "multinode-921619" in 22.028µs
	I1009 23:19:42.658645  102501 start.go:96] Skipping create...Using existing machine configuration
	I1009 23:19:42.658652  102501 fix.go:54] fixHost starting: 
	I1009 23:19:42.658915  102501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:19:42.658948  102501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:19:42.672648  102501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40541
	I1009 23:19:42.673039  102501 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:19:42.673480  102501 main.go:141] libmachine: Using API Version  1
	I1009 23:19:42.673502  102501 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:19:42.673799  102501 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:19:42.673993  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:19:42.674141  102501 main.go:141] libmachine: (multinode-921619) Calling .GetState
	I1009 23:19:42.676000  102501 fix.go:102] recreateIfNeeded on multinode-921619: state=Stopped err=<nil>
	I1009 23:19:42.676021  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	W1009 23:19:42.676184  102501 fix.go:128] unexpected machine state, will restart: <nil>
	I1009 23:19:42.678714  102501 out.go:177] * Restarting existing kvm2 VM for "multinode-921619" ...
	I1009 23:19:42.680025  102501 main.go:141] libmachine: (multinode-921619) Calling .Start
	I1009 23:19:42.680203  102501 main.go:141] libmachine: (multinode-921619) Ensuring networks are active...
	I1009 23:19:42.681001  102501 main.go:141] libmachine: (multinode-921619) Ensuring network default is active
	I1009 23:19:42.681449  102501 main.go:141] libmachine: (multinode-921619) Ensuring network mk-multinode-921619 is active
	I1009 23:19:42.681823  102501 main.go:141] libmachine: (multinode-921619) Getting domain xml...
	I1009 23:19:42.682587  102501 main.go:141] libmachine: (multinode-921619) Creating domain...
	I1009 23:19:43.899709  102501 main.go:141] libmachine: (multinode-921619) Waiting to get IP...
	I1009 23:19:43.900830  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:43.901318  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:43.901439  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:43.901326  102536 retry.go:31] will retry after 237.405822ms: waiting for machine to come up
	I1009 23:19:44.140909  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:44.141369  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:44.141395  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:44.141285  102536 retry.go:31] will retry after 330.20986ms: waiting for machine to come up
	I1009 23:19:44.472830  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:44.473397  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:44.473498  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:44.473334  102536 retry.go:31] will retry after 424.010882ms: waiting for machine to come up
	I1009 23:19:44.898955  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:44.899336  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:44.899367  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:44.899285  102536 retry.go:31] will retry after 485.273155ms: waiting for machine to come up
	I1009 23:19:45.386042  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:45.386267  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:45.386298  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:45.386223  102536 retry.go:31] will retry after 587.068913ms: waiting for machine to come up
	I1009 23:19:45.975115  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:45.975524  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:45.975555  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:45.975475  102536 retry.go:31] will retry after 594.885578ms: waiting for machine to come up
	I1009 23:19:46.572228  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:46.572710  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:46.572732  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:46.572648  102536 retry.go:31] will retry after 896.005691ms: waiting for machine to come up
	I1009 23:19:47.470886  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:47.471343  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:47.471370  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:47.471299  102536 retry.go:31] will retry after 1.167441753s: waiting for machine to come up
	I1009 23:19:48.640221  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:48.640797  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:48.640828  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:48.640750  102536 retry.go:31] will retry after 1.388777428s: waiting for machine to come up
	I1009 23:19:50.031274  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:50.031649  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:50.031693  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:50.031576  102536 retry.go:31] will retry after 1.747281603s: waiting for machine to come up
	I1009 23:19:51.781705  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:51.782185  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:51.782218  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:51.782122  102536 retry.go:31] will retry after 2.469919209s: waiting for machine to come up
	I1009 23:19:54.253897  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:54.254261  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:54.254291  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:54.254238  102536 retry.go:31] will retry after 2.229572497s: waiting for machine to come up
	I1009 23:19:56.486729  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:56.487104  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:56.487122  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:56.487070  102536 retry.go:31] will retry after 3.115495801s: waiting for machine to come up
	I1009 23:19:59.604928  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:59.605366  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:59.605390  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:59.605317  102536 retry.go:31] will retry after 3.442831938s: waiting for machine to come up
	I1009 23:20:03.049586  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.050068  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has current primary IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.050095  102501 main.go:141] libmachine: (multinode-921619) Found IP for machine: 192.168.39.167
	I1009 23:20:03.050107  102501 main.go:141] libmachine: (multinode-921619) Reserving static IP address...
	I1009 23:20:03.050537  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "multinode-921619", mac: "52:54:00:65:2b:27", ip: "192.168.39.167"} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.050566  102501 main.go:141] libmachine: (multinode-921619) Reserved static IP address: 192.168.39.167
	I1009 23:20:03.050580  102501 main.go:141] libmachine: (multinode-921619) DBG | skip adding static IP to network mk-multinode-921619 - found existing host DHCP lease matching {name: "multinode-921619", mac: "52:54:00:65:2b:27", ip: "192.168.39.167"}
	I1009 23:20:03.050595  102501 main.go:141] libmachine: (multinode-921619) DBG | Getting to WaitForSSH function...
	I1009 23:20:03.050616  102501 main.go:141] libmachine: (multinode-921619) Waiting for SSH to be available...
	I1009 23:20:03.052668  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.052975  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.052997  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.053105  102501 main.go:141] libmachine: (multinode-921619) DBG | Using SSH client type: external
	I1009 23:20:03.053133  102501 main.go:141] libmachine: (multinode-921619) DBG | Using SSH private key: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619/id_rsa (-rw-------)
	I1009 23:20:03.053153  102501 main.go:141] libmachine: (multinode-921619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 23:20:03.053164  102501 main.go:141] libmachine: (multinode-921619) DBG | About to run SSH command:
	I1009 23:20:03.053179  102501 main.go:141] libmachine: (multinode-921619) DBG | exit 0
	I1009 23:20:03.142014  102501 main.go:141] libmachine: (multinode-921619) DBG | SSH cmd err, output: <nil>: 
	I1009 23:20:03.142377  102501 main.go:141] libmachine: (multinode-921619) Calling .GetConfigRaw
	I1009 23:20:03.143029  102501 main.go:141] libmachine: (multinode-921619) Calling .GetIP
	I1009 23:20:03.145626  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.145990  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.146024  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.146294  102501 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/config.json ...
	I1009 23:20:03.146512  102501 machine.go:88] provisioning docker machine ...
	I1009 23:20:03.146531  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:20:03.146757  102501 main.go:141] libmachine: (multinode-921619) Calling .GetMachineName
	I1009 23:20:03.146915  102501 buildroot.go:166] provisioning hostname "multinode-921619"
	I1009 23:20:03.146930  102501 main.go:141] libmachine: (multinode-921619) Calling .GetMachineName
	I1009 23:20:03.147080  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:03.149243  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.149566  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.149606  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.149676  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:03.149854  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.150025  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.150145  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:03.150273  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:20:03.150618  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I1009 23:20:03.150629  102501 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-921619 && echo "multinode-921619" | sudo tee /etc/hostname
	I1009 23:20:03.277603  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-921619
	
	I1009 23:20:03.277638  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:03.280400  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.280747  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.280790  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.280946  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:03.281156  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.281346  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.281498  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:03.281671  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:20:03.281998  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I1009 23:20:03.282032  102501 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-921619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-921619/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-921619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 23:20:03.405672  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 23:20:03.405707  102501 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17375-78415/.minikube CaCertPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17375-78415/.minikube}
	I1009 23:20:03.405749  102501 buildroot.go:174] setting up certificates
	I1009 23:20:03.405760  102501 provision.go:83] configureAuth start
	I1009 23:20:03.405779  102501 main.go:141] libmachine: (multinode-921619) Calling .GetMachineName
	I1009 23:20:03.406085  102501 main.go:141] libmachine: (multinode-921619) Calling .GetIP
	I1009 23:20:03.408851  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.409320  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.409345  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.409568  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:03.411602  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.411933  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.411958  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.412028  102501 provision.go:138] copyHostCerts
	I1009 23:20:03.412072  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem
	I1009 23:20:03.412119  102501 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem, removing ...
	I1009 23:20:03.412133  102501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem
	I1009 23:20:03.412212  102501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem (1082 bytes)
	I1009 23:20:03.412334  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem
	I1009 23:20:03.412371  102501 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem, removing ...
	I1009 23:20:03.412381  102501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem
	I1009 23:20:03.412422  102501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem (1123 bytes)
	I1009 23:20:03.412526  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem
	I1009 23:20:03.412554  102501 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem, removing ...
	I1009 23:20:03.412566  102501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem
	I1009 23:20:03.412601  102501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem (1679 bytes)
	I1009 23:20:03.412678  102501 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca-key.pem org=jenkins.multinode-921619 san=[192.168.39.167 192.168.39.167 localhost 127.0.0.1 minikube multinode-921619]
	I1009 23:20:03.559867  102501 provision.go:172] copyRemoteCerts
	I1009 23:20:03.559927  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 23:20:03.559953  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:03.563117  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.563509  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.563535  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.563718  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:03.563915  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.564079  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:03.564215  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619/id_rsa Username:docker}
	I1009 23:20:03.656572  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 23:20:03.656659  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 23:20:03.678392  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 23:20:03.678450  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 23:20:03.700167  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 23:20:03.700229  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 23:20:03.721303  102501 provision.go:86] duration metric: configureAuth took 315.526073ms
	I1009 23:20:03.721327  102501 buildroot.go:189] setting minikube options for container-runtime
	I1009 23:20:03.721538  102501 config.go:182] Loaded profile config "multinode-921619": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:20:03.721562  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:20:03.721848  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:03.724544  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.724947  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.724981  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.725099  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:03.725327  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.725477  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.725594  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:03.725754  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:20:03.726050  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I1009 23:20:03.726062  102501 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1009 23:20:03.843926  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1009 23:20:03.843954  102501 buildroot.go:70] root file system type: tmpfs
	I1009 23:20:03.844107  102501 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1009 23:20:03.844149  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:03.847133  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.847492  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.847529  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.847708  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:03.847909  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.848085  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.848230  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:03.848385  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:20:03.848727  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I1009 23:20:03.848791  102501 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1009 23:20:03.980374  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1009 23:20:03.980448  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:03.983127  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.983489  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.983522  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.983701  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:03.983874  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.984045  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.984160  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:03.984295  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:20:03.984673  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I1009 23:20:03.984693  102501 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1009 23:20:04.899192  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1009 23:20:04.899224  102501 machine.go:91] provisioned docker machine in 1.752695342s
	I1009 23:20:04.899245  102501 start.go:300] post-start starting for "multinode-921619" (driver="kvm2")
	I1009 23:20:04.899256  102501 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 23:20:04.899277  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:20:04.899612  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 23:20:04.899653  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:04.902154  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:04.902555  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:04.902583  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:04.902771  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:04.902952  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:04.903125  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:04.903226  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619/id_rsa Username:docker}
	I1009 23:20:04.992146  102501 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 23:20:04.996385  102501 command_runner.go:130] > NAME=Buildroot
	I1009 23:20:04.996406  102501 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1009 23:20:04.996421  102501 command_runner.go:130] > ID=buildroot
	I1009 23:20:04.996429  102501 command_runner.go:130] > VERSION_ID=2021.02.12
	I1009 23:20:04.996437  102501 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1009 23:20:04.996508  102501 info.go:137] Remote host: Buildroot 2021.02.12
	I1009 23:20:04.996532  102501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-78415/.minikube/addons for local assets ...
	I1009 23:20:04.996599  102501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-78415/.minikube/files for local assets ...
	I1009 23:20:04.996698  102501 filesync.go:149] local asset: /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem -> 856012.pem in /etc/ssl/certs
	I1009 23:20:04.996711  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem -> /etc/ssl/certs/856012.pem
	I1009 23:20:04.996824  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 23:20:05.004858  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem --> /etc/ssl/certs/856012.pem (1708 bytes)
	I1009 23:20:05.027499  102501 start.go:303] post-start completed in 128.238762ms
	I1009 23:20:05.027519  102501 fix.go:56] fixHost completed within 22.368865879s
	I1009 23:20:05.027539  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:05.030028  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.030398  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:05.030431  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.030597  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:05.030795  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:05.030927  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:05.031051  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:05.031206  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:20:05.031517  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I1009 23:20:05.031533  102501 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 23:20:05.147008  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696893605.096589079
	
	I1009 23:20:05.147032  102501 fix.go:206] guest clock: 1696893605.096589079
	I1009 23:20:05.147040  102501 fix.go:219] Guest: 2023-10-09 23:20:05.096589079 +0000 UTC Remote: 2023-10-09 23:20:05.027522172 +0000 UTC m=+22.522167554 (delta=69.066907ms)
	I1009 23:20:05.147063  102501 fix.go:190] guest clock delta is within tolerance: 69.066907ms
	I1009 23:20:05.147070  102501 start.go:83] releasing machines lock for "multinode-921619", held for 22.488427405s
	I1009 23:20:05.147105  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:20:05.147388  102501 main.go:141] libmachine: (multinode-921619) Calling .GetIP
	I1009 23:20:05.149888  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.150249  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:05.150280  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.150485  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:20:05.150954  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:20:05.151101  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:20:05.151199  102501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 23:20:05.151238  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:05.151321  102501 ssh_runner.go:195] Run: cat /version.json
	I1009 23:20:05.151346  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:05.154023  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.154169  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.154415  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:05.154445  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.154490  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:05.154528  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.154614  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:05.154725  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:05.154810  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:05.154907  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:05.154984  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:05.155001  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:05.155094  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619/id_rsa Username:docker}
	I1009 23:20:05.155192  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619/id_rsa Username:docker}
	I1009 23:20:05.260508  102501 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 23:20:05.261108  102501 command_runner.go:130] > {"iso_version": "v1.31.0-1695060926-17240", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "0402681e4770013826956f326b174c70611f3073"}
	I1009 23:20:05.261272  102501 ssh_runner.go:195] Run: systemctl --version
	I1009 23:20:05.266667  102501 command_runner.go:130] > systemd 247 (247)
	I1009 23:20:05.266703  102501 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1009 23:20:05.266772  102501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 23:20:05.271860  102501 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 23:20:05.271969  102501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 23:20:05.272037  102501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:20:05.285541  102501 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1009 23:20:05.285571  102501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
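	(The find invocation above is printed as a joined argv, so its shell metacharacters appear unescaped. A copy-pasteable equivalent of what was run, with the escaping restored, would be:
	
		sudo find /etc/cni/net.d -maxdepth 1 -type f \
		  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
		  -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
	
	which is why 87-podman-bridge.conflist is reported, then renamed with a .mk_disabled suffix.)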
	I1009 23:20:05.285583  102501 start.go:472] detecting cgroup driver to use...
	I1009 23:20:05.285708  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:20:05.301938  102501 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1009 23:20:05.302014  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1009 23:20:05.311927  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 23:20:05.321797  102501 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1009 23:20:05.321864  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1009 23:20:05.331819  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 23:20:05.341858  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 23:20:05.351719  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 23:20:05.361423  102501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 23:20:05.371820  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 23:20:05.381532  102501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 23:20:05.390418  102501 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 23:20:05.390496  102501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 23:20:05.399122  102501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:20:05.500931  102501 ssh_runner.go:195] Run: sudo systemctl restart containerd
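	(Taken together, the sed edits above leave /etc/containerd/config.toml using the cgroupfs cgroup driver, the v2 runc shim, pause:3.9, and the standard CNI conf dir. A rough sketch of the affected fragment; only the keys come from the sed patterns in the log, the surrounding table headers are assumed from containerd's stock config:
	
		[plugins."io.containerd.grpc.v1.cri"]
		  sandbox_image = "registry.k8s.io/pause:3.9"
		  restrict_oom_score_adj = false
		  [plugins."io.containerd.grpc.v1.cri".cni]
		    conf_dir = "/etc/cni/net.d"
		  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
		    runtime_type = "io.containerd.runc.v2"
		    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
		      SystemdCgroup = false
	)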
	I1009 23:20:05.519011  102501 start.go:472] detecting cgroup driver to use...
	I1009 23:20:05.519094  102501 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1009 23:20:05.531353  102501 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1009 23:20:05.531368  102501 command_runner.go:130] > [Unit]
	I1009 23:20:05.531374  102501 command_runner.go:130] > Description=Docker Application Container Engine
	I1009 23:20:05.531379  102501 command_runner.go:130] > Documentation=https://docs.docker.com
	I1009 23:20:05.531385  102501 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1009 23:20:05.531390  102501 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1009 23:20:05.531403  102501 command_runner.go:130] > StartLimitBurst=3
	I1009 23:20:05.531408  102501 command_runner.go:130] > StartLimitIntervalSec=60
	I1009 23:20:05.531412  102501 command_runner.go:130] > [Service]
	I1009 23:20:05.531416  102501 command_runner.go:130] > Type=notify
	I1009 23:20:05.531424  102501 command_runner.go:130] > Restart=on-failure
	I1009 23:20:05.531439  102501 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1009 23:20:05.531460  102501 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1009 23:20:05.531472  102501 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1009 23:20:05.531486  102501 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1009 23:20:05.531497  102501 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1009 23:20:05.531510  102501 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1009 23:20:05.531523  102501 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1009 23:20:05.531543  102501 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1009 23:20:05.531558  102501 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1009 23:20:05.531564  102501 command_runner.go:130] > ExecStart=
	I1009 23:20:05.531590  102501 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1009 23:20:05.531602  102501 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1009 23:20:05.531609  102501 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1009 23:20:05.531615  102501 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1009 23:20:05.531619  102501 command_runner.go:130] > LimitNOFILE=infinity
	I1009 23:20:05.531623  102501 command_runner.go:130] > LimitNPROC=infinity
	I1009 23:20:05.531627  102501 command_runner.go:130] > LimitCORE=infinity
	I1009 23:20:05.531632  102501 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1009 23:20:05.531644  102501 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1009 23:20:05.531651  102501 command_runner.go:130] > TasksMax=infinity
	I1009 23:20:05.531658  102501 command_runner.go:130] > TimeoutStartSec=0
	I1009 23:20:05.531670  102501 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1009 23:20:05.531680  102501 command_runner.go:130] > Delegate=yes
	I1009 23:20:05.531690  102501 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1009 23:20:05.531699  102501 command_runner.go:130] > KillMode=process
	I1009 23:20:05.531704  102501 command_runner.go:130] > [Install]
	I1009 23:20:05.531716  102501 command_runner.go:130] > WantedBy=multi-user.target
	I1009 23:20:05.531793  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:20:05.552369  102501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 23:20:05.569205  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:20:05.580862  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 23:20:05.592206  102501 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 23:20:05.622389  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 23:20:05.634651  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:20:05.651364  102501 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
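	(/etc/crictl.yaml is how crictl picks its CRI endpoint. The first write above pointed it at containerd; this rewrite points it at cri-dockerd, so from here on a plain invocation such as
	
		sudo crictl ps    # resolves unix:///var/run/cri-dockerd.sock via /etc/crictl.yaml
	
	talks to Docker through cri-dockerd without needing an explicit --runtime-endpoint flag.)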
	I1009 23:20:05.651450  102501 ssh_runner.go:195] Run: which cri-dockerd
	I1009 23:20:05.654957  102501 command_runner.go:130] > /usr/bin/cri-dockerd
	I1009 23:20:05.655078  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1009 23:20:05.663913  102501 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1009 23:20:05.679609  102501 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1009 23:20:05.782471  102501 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1009 23:20:05.890512  102501 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1009 23:20:05.890657  102501 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
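	(The 130-byte daemon.json written here is not echoed to the log. Based on the "cgroupfs" driver named on the previous line, a hypothetical reconstruction, not the logged file, would be something like:
	
		sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
		{
		  "exec-opts": ["native.cgroupdriver=cgroupfs"],
		  "log-driver": "json-file",
		  "log-opts": { "max-size": "100m" },
		  "storage-driver": "overlay2"
		}
		EOF
	
	followed by the daemon-reload and docker restart seen next.)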
	I1009 23:20:05.907250  102501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:20:06.008433  102501 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 23:20:07.512425  102501 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.503952293s)
	I1009 23:20:07.512500  102501 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1009 23:20:07.624629  102501 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1009 23:20:07.733434  102501 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1009 23:20:07.845901  102501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:20:07.958297  102501 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1009 23:20:07.974336  102501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:20:08.079485  102501 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1009 23:20:08.157244  102501 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1009 23:20:08.157325  102501 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1009 23:20:08.163200  102501 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1009 23:20:08.163230  102501 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 23:20:08.163241  102501 command_runner.go:130] > Device: 16h/22d	Inode: 894         Links: 1
	I1009 23:20:08.163250  102501 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1009 23:20:08.163257  102501 command_runner.go:130] > Access: 2023-10-09 23:20:08.043785686 +0000
	I1009 23:20:08.163261  102501 command_runner.go:130] > Modify: 2023-10-09 23:20:08.043785686 +0000
	I1009 23:20:08.163267  102501 command_runner.go:130] > Change: 2023-10-09 23:20:08.045785686 +0000
	I1009 23:20:08.163270  102501 command_runner.go:130] >  Birth: -
	I1009 23:20:08.163644  102501 start.go:540] Will wait 60s for crictl version
	I1009 23:20:08.163696  102501 ssh_runner.go:195] Run: which crictl
	I1009 23:20:08.168305  102501 command_runner.go:130] > /usr/bin/crictl
	I1009 23:20:08.168476  102501 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 23:20:08.220203  102501 command_runner.go:130] > Version:  0.1.0
	I1009 23:20:08.220225  102501 command_runner.go:130] > RuntimeName:  docker
	I1009 23:20:08.220230  102501 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1009 23:20:08.220235  102501 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 23:20:08.221898  102501 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1009 23:20:08.221968  102501 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 23:20:08.249007  102501 command_runner.go:130] > 24.0.6
	I1009 23:20:08.250211  102501 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 23:20:08.275194  102501 command_runner.go:130] > 24.0.6
	I1009 23:20:08.277983  102501 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1009 23:20:08.278046  102501 main.go:141] libmachine: (multinode-921619) Calling .GetIP
	I1009 23:20:08.280705  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:08.281134  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:08.281172  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:08.281378  102501 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 23:20:08.285404  102501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 23:20:08.298585  102501 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1009 23:20:08.298643  102501 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 23:20:08.316856  102501 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.2
	I1009 23:20:08.316880  102501 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.2
	I1009 23:20:08.316889  102501 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.2
	I1009 23:20:08.316898  102501 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.2
	I1009 23:20:08.316906  102501 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1009 23:20:08.316913  102501 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1009 23:20:08.316922  102501 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1009 23:20:08.316933  102501 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1009 23:20:08.316943  102501 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 23:20:08.316950  102501 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1009 23:20:08.317734  102501 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1009 23:20:08.317760  102501 docker.go:619] Images already preloaded, skipping extraction
	I1009 23:20:08.317824  102501 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 23:20:08.337694  102501 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.2
	I1009 23:20:08.337721  102501 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.2
	I1009 23:20:08.337730  102501 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.2
	I1009 23:20:08.337739  102501 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.2
	I1009 23:20:08.337746  102501 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1009 23:20:08.337754  102501 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1009 23:20:08.337763  102501 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1009 23:20:08.337770  102501 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1009 23:20:08.337778  102501 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 23:20:08.337790  102501 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1009 23:20:08.337829  102501 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1009 23:20:08.337851  102501 cache_images.go:84] Images are preloaded, skipping loading
	I1009 23:20:08.337910  102501 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1009 23:20:08.363645  102501 command_runner.go:130] > cgroupfs
	I1009 23:20:08.364850  102501 cni.go:84] Creating CNI manager for ""
	I1009 23:20:08.364871  102501 cni.go:136] 2 nodes found, recommending kindnet
	I1009 23:20:08.364897  102501 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1009 23:20:08.364933  102501 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.167 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-921619 NodeName:multinode-921619 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 23:20:08.365112  102501 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-921619"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 23:20:08.365201  102501 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-921619 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-921619 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1009 23:20:08.365259  102501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1009 23:20:08.374426  102501 command_runner.go:130] > kubeadm
	I1009 23:20:08.374443  102501 command_runner.go:130] > kubectl
	I1009 23:20:08.374448  102501 command_runner.go:130] > kubelet
	I1009 23:20:08.374662  102501 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 23:20:08.374745  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 23:20:08.382881  102501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1009 23:20:08.398895  102501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 23:20:08.414664  102501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1009 23:20:08.431261  102501 ssh_runner.go:195] Run: grep 192.168.39.167	control-plane.minikube.internal$ /etc/hosts
	I1009 23:20:08.434954  102501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 23:20:08.446965  102501 certs.go:56] Setting up /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619 for IP: 192.168.39.167
	I1009 23:20:08.446999  102501 certs.go:190] acquiring lock for shared ca certs: {Name:mke2558e764208d6103dc9316e1963152570f27b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:20:08.447139  102501 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17375-78415/.minikube/ca.key
	I1009 23:20:08.447183  102501 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17375-78415/.minikube/proxy-client-ca.key
	I1009 23:20:08.447255  102501 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/client.key
	I1009 23:20:08.447302  102501 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/apiserver.key.5fe8596d
	I1009 23:20:08.447343  102501 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/proxy-client.key
	I1009 23:20:08.447354  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 23:20:08.447367  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 23:20:08.447380  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 23:20:08.447392  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 23:20:08.447411  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 23:20:08.447424  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 23:20:08.447435  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 23:20:08.447447  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 23:20:08.447493  102501 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/85601.pem (1338 bytes)
	W1009 23:20:08.447522  102501 certs.go:433] ignoring /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/85601_empty.pem, impossibly tiny 0 bytes
	I1009 23:20:08.447532  102501 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 23:20:08.447557  102501 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem (1082 bytes)
	I1009 23:20:08.447579  102501 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem (1123 bytes)
	I1009 23:20:08.447600  102501 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem (1679 bytes)
	I1009 23:20:08.447640  102501 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem (1708 bytes)
	I1009 23:20:08.447676  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem -> /usr/share/ca-certificates/856012.pem
	I1009 23:20:08.447690  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:20:08.447702  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/85601.pem -> /usr/share/ca-certificates/85601.pem
	I1009 23:20:08.448411  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1009 23:20:08.471339  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 23:20:08.495014  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 23:20:08.518293  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 23:20:08.541374  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 23:20:08.564198  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 23:20:08.587349  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 23:20:08.610562  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 23:20:08.633178  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem --> /usr/share/ca-certificates/856012.pem (1708 bytes)
	I1009 23:20:08.655844  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 23:20:08.678896  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/certs/85601.pem --> /usr/share/ca-certificates/85601.pem (1338 bytes)
	I1009 23:20:08.701523  102501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 23:20:08.717647  102501 ssh_runner.go:195] Run: openssl version
	I1009 23:20:08.722843  102501 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1009 23:20:08.723202  102501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/856012.pem && ln -fs /usr/share/ca-certificates/856012.pem /etc/ssl/certs/856012.pem"
	I1009 23:20:08.732707  102501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/856012.pem
	I1009 23:20:08.737331  102501 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 23:00 /usr/share/ca-certificates/856012.pem
	I1009 23:20:08.737356  102501 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  9 23:00 /usr/share/ca-certificates/856012.pem
	I1009 23:20:08.737395  102501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/856012.pem
	I1009 23:20:08.742994  102501 command_runner.go:130] > 3ec20f2e
	I1009 23:20:08.743060  102501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/856012.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 23:20:08.752377  102501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 23:20:08.761505  102501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:20:08.765802  102501 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:20:08.765993  102501 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  9 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:20:08.766049  102501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:20:08.771239  102501 command_runner.go:130] > b5213941
	I1009 23:20:08.771294  102501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 23:20:08.780391  102501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/85601.pem && ln -fs /usr/share/ca-certificates/85601.pem /etc/ssl/certs/85601.pem"
	I1009 23:20:08.789372  102501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85601.pem
	I1009 23:20:08.793515  102501 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 23:00 /usr/share/ca-certificates/85601.pem
	I1009 23:20:08.793728  102501 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  9 23:00 /usr/share/ca-certificates/85601.pem
	I1009 23:20:08.793767  102501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85601.pem
	I1009 23:20:08.799268  102501 command_runner.go:130] > 51391683
	I1009 23:20:08.799330  102501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/85601.pem /etc/ssl/certs/51391683.0"
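	(The test/ln sequences above implement OpenSSL's hashed-directory CA lookup: "openssl x509 -hash -noout" prints the certificate's subject-name hash (3ec20f2e, b5213941, 51391683 above), and a symlink named <hash>.0 in /etc/ssl/certs is what lets OpenSSL find the CA by hash at verify time. The same convention, spelled out as an illustration:
	
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	)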
	I1009 23:20:08.808528  102501 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1009 23:20:08.812734  102501 command_runner.go:130] > ca.crt
	I1009 23:20:08.812758  102501 command_runner.go:130] > ca.key
	I1009 23:20:08.812767  102501 command_runner.go:130] > healthcheck-client.crt
	I1009 23:20:08.812774  102501 command_runner.go:130] > healthcheck-client.key
	I1009 23:20:08.812781  102501 command_runner.go:130] > peer.crt
	I1009 23:20:08.812795  102501 command_runner.go:130] > peer.key
	I1009 23:20:08.812805  102501 command_runner.go:130] > server.crt
	I1009 23:20:08.812811  102501 command_runner.go:130] > server.key
	I1009 23:20:08.812865  102501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 23:20:08.818235  102501 command_runner.go:130] > Certificate will not expire
	I1009 23:20:08.818502  102501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 23:20:08.823810  102501 command_runner.go:130] > Certificate will not expire
	I1009 23:20:08.823867  102501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 23:20:08.829311  102501 command_runner.go:130] > Certificate will not expire
	I1009 23:20:08.829363  102501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 23:20:08.834768  102501 command_runner.go:130] > Certificate will not expire
	I1009 23:20:08.834881  102501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 23:20:08.840297  102501 command_runner.go:130] > Certificate will not expire
	I1009 23:20:08.840408  102501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 23:20:08.845708  102501 command_runner.go:130] > Certificate will not expire
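	(Each -checkend 86400 probe above asks whether the certificate will still be valid 86400 seconds, i.e. 24 hours, from now; openssl exits 0 and prints "Certificate will not expire" if so, exits 1 and prints "Certificate will expire" otherwise, which makes it easy to gate renewal in a script:
	
		openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
		  || echo "renew: expires within 24h"
	)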
	I1009 23:20:08.845923  102501 kubeadm.go:404] StartCluster: {Name:multinode-921619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-921619 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:20:08.846092  102501 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1009 23:20:08.865084  102501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 23:20:08.874606  102501 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1009 23:20:08.874632  102501 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1009 23:20:08.874640  102501 command_runner.go:130] > /var/lib/minikube/etcd:
	I1009 23:20:08.874646  102501 command_runner.go:130] > member
	I1009 23:20:08.874782  102501 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1009 23:20:08.874797  102501 kubeadm.go:636] restartCluster start
	I1009 23:20:08.874847  102501 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 23:20:08.883134  102501 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:08.883547  102501 kubeconfig.go:135] verify returned: extract IP: "multinode-921619" does not appear in /home/jenkins/minikube-integration/17375-78415/kubeconfig
	I1009 23:20:08.883643  102501 kubeconfig.go:146] "multinode-921619" context is missing from /home/jenkins/minikube-integration/17375-78415/kubeconfig - will repair!
	I1009 23:20:08.883929  102501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-78415/kubeconfig: {Name:mkee061910efe3fb616ee347e2e0b1635aa74f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:20:08.884285  102501 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17375-78415/kubeconfig
	I1009 23:20:08.884476  102501 kapi.go:59] client config for multinode-921619: &rest.Config{Host:"https://192.168.39.167:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/client.crt", KeyFile:"/home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/client.key", CAFile:"/home/jenkins/minikube-integration/17375-78415/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c11c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 23:20:08.885034  102501 cert_rotation.go:137] Starting client certificate rotation controller
	I1009 23:20:08.885230  102501 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 23:20:08.893659  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:08.893722  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:08.904105  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:08.904125  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:08.904163  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:08.913942  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:09.414697  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:09.414781  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:09.426226  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:09.914885  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:09.914970  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:09.926398  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:10.415011  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:10.415087  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:10.426306  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:10.914953  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:10.915058  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:10.926103  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:11.414639  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:11.414715  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:11.426162  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:11.914754  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:11.914836  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:11.925929  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:12.414630  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:12.414711  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:12.426487  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:12.914151  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:12.914267  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:12.925531  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:13.414101  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:13.414226  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:13.425312  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:13.914840  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:13.914911  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:13.926078  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:14.414754  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:14.414833  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:14.426130  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:14.914766  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:14.914846  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:14.926607  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:15.414104  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:15.414170  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:15.425651  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:15.914234  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:15.914310  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:15.927045  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:16.414690  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:16.414793  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:16.426039  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:16.914658  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:16.914742  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:16.926340  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:17.414980  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:17.415089  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:17.426566  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:17.914826  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:17.914898  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:17.926293  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:18.414769  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:18.414869  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:18.426110  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
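	(The block from 23:20:08.89 to 23:20:18.43 above is a poll loop: minikube re-runs the pgrep probe roughly every 500 ms until a ~10 s context deadline, then gives up, producing the "context deadline exceeded" on the next line. A minimal shell re-creation of the same wait-for-apiserver pattern; the interval and attempt count are inferred from the timestamps, not taken from minikube's code:
	
		for _ in $(seq 1 20); do          # ~20 attempts x 0.5 s ≈ the 10 s window seen above
		  if pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); then
		    echo "apiserver pid: ${pid}"
		    break
		  fi
		  sleep 0.5
		done
	)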
	I1009 23:20:18.894695  102501 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1009 23:20:18.894736  102501 kubeadm.go:1128] stopping kube-system containers ...
	I1009 23:20:18.894817  102501 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1009 23:20:18.920004  102501 command_runner.go:130] > 453f6dce464b
	I1009 23:20:18.920026  102501 command_runner.go:130] > 88d988a42798
	I1009 23:20:18.920033  102501 command_runner.go:130] > af05e798f2ed
	I1009 23:20:18.920040  102501 command_runner.go:130] > 865e9ceee649
	I1009 23:20:18.920051  102501 command_runner.go:130] > fbb07f20fa16
	I1009 23:20:18.920057  102501 command_runner.go:130] > ce86ce17dc12
	I1009 23:20:18.920063  102501 command_runner.go:130] > 96f26fc70c3e
	I1009 23:20:18.920070  102501 command_runner.go:130] > 3f140f1b444f
	I1009 23:20:18.920079  102501 command_runner.go:130] > aa6841202730
	I1009 23:20:18.920085  102501 command_runner.go:130] > 2c47ae8aed1a
	I1009 23:20:18.920090  102501 command_runner.go:130] > cb0e5b797b8d
	I1009 23:20:18.920097  102501 command_runner.go:130] > ac1bbc7d4311
	I1009 23:20:18.920109  102501 command_runner.go:130] > 3e987851ad86
	I1009 23:20:18.920115  102501 command_runner.go:130] > 7ca4344ccad3
	I1009 23:20:18.920123  102501 command_runner.go:130] > 3b09d0826e99
	I1009 23:20:18.920132  102501 command_runner.go:130] > 665cbd4fad77
	I1009 23:20:18.920137  102501 command_runner.go:130] > 6d2453b4ccbd
	I1009 23:20:18.920142  102501 command_runner.go:130] > 225f665e1777
	I1009 23:20:18.920146  102501 command_runner.go:130] > b387ab7d9878
	I1009 23:20:18.920153  102501 command_runner.go:130] > 84496d0bb2a9
	I1009 23:20:18.920157  102501 command_runner.go:130] > 3c097ec42a79
	I1009 23:20:18.920160  102501 command_runner.go:130] > acc138948996
	I1009 23:20:18.920164  102501 command_runner.go:130] > ac407d90f64c
	I1009 23:20:18.920170  102501 command_runner.go:130] > 66ffe93c503b
	I1009 23:20:18.920173  102501 command_runner.go:130] > 28ea40be486c
	I1009 23:20:18.920177  102501 command_runner.go:130] > 4a9e9455ca75
	I1009 23:20:18.920185  102501 command_runner.go:130] > 866b3c026498
	I1009 23:20:18.920192  102501 command_runner.go:130] > 6807030f028b
	I1009 23:20:18.920196  102501 command_runner.go:130] > 1f3e1b00829d
	I1009 23:20:18.920200  102501 command_runner.go:130] > 8f01da7e8d17
	I1009 23:20:18.920203  102501 command_runner.go:130] > 41105b4ddb01
	I1009 23:20:18.920209  102501 command_runner.go:130] > 7ed3b793352f
	I1009 23:20:18.920232  102501 docker.go:464] Stopping containers: [453f6dce464b 88d988a42798 af05e798f2ed 865e9ceee649 fbb07f20fa16 ce86ce17dc12 96f26fc70c3e 3f140f1b444f aa6841202730 2c47ae8aed1a cb0e5b797b8d ac1bbc7d4311 3e987851ad86 7ca4344ccad3 3b09d0826e99 665cbd4fad77 6d2453b4ccbd 225f665e1777 b387ab7d9878 84496d0bb2a9 3c097ec42a79 acc138948996 ac407d90f64c 66ffe93c503b 28ea40be486c 4a9e9455ca75 866b3c026498 6807030f028b 1f3e1b00829d 8f01da7e8d17 41105b4ddb01 7ed3b793352f]
	I1009 23:20:18.920290  102501 ssh_runner.go:195] Run: docker stop 453f6dce464b 88d988a42798 af05e798f2ed 865e9ceee649 fbb07f20fa16 ce86ce17dc12 96f26fc70c3e 3f140f1b444f aa6841202730 2c47ae8aed1a cb0e5b797b8d ac1bbc7d4311 3e987851ad86 7ca4344ccad3 3b09d0826e99 665cbd4fad77 6d2453b4ccbd 225f665e1777 b387ab7d9878 84496d0bb2a9 3c097ec42a79 acc138948996 ac407d90f64c 66ffe93c503b 28ea40be486c 4a9e9455ca75 866b3c026498 6807030f028b 1f3e1b00829d 8f01da7e8d17 41105b4ddb01 7ed3b793352f
	I1009 23:20:18.941155  102501 command_runner.go:130] > 453f6dce464b
	I1009 23:20:18.941181  102501 command_runner.go:130] > 88d988a42798
	I1009 23:20:18.941188  102501 command_runner.go:130] > af05e798f2ed
	I1009 23:20:18.941193  102501 command_runner.go:130] > 865e9ceee649
	I1009 23:20:18.941197  102501 command_runner.go:130] > fbb07f20fa16
	I1009 23:20:18.941201  102501 command_runner.go:130] > ce86ce17dc12
	I1009 23:20:18.941205  102501 command_runner.go:130] > 96f26fc70c3e
	I1009 23:20:18.941208  102501 command_runner.go:130] > 3f140f1b444f
	I1009 23:20:18.941212  102501 command_runner.go:130] > aa6841202730
	I1009 23:20:18.941229  102501 command_runner.go:130] > 2c47ae8aed1a
	I1009 23:20:18.941233  102501 command_runner.go:130] > cb0e5b797b8d
	I1009 23:20:18.941237  102501 command_runner.go:130] > ac1bbc7d4311
	I1009 23:20:18.941240  102501 command_runner.go:130] > 3e987851ad86
	I1009 23:20:18.941244  102501 command_runner.go:130] > 7ca4344ccad3
	I1009 23:20:18.941248  102501 command_runner.go:130] > 3b09d0826e99
	I1009 23:20:18.941252  102501 command_runner.go:130] > 665cbd4fad77
	I1009 23:20:18.941255  102501 command_runner.go:130] > 6d2453b4ccbd
	I1009 23:20:18.941259  102501 command_runner.go:130] > 225f665e1777
	I1009 23:20:18.941266  102501 command_runner.go:130] > b387ab7d9878
	I1009 23:20:18.941273  102501 command_runner.go:130] > 84496d0bb2a9
	I1009 23:20:18.941284  102501 command_runner.go:130] > 3c097ec42a79
	I1009 23:20:18.941288  102501 command_runner.go:130] > acc138948996
	I1009 23:20:18.941291  102501 command_runner.go:130] > ac407d90f64c
	I1009 23:20:18.941294  102501 command_runner.go:130] > 66ffe93c503b
	I1009 23:20:18.941298  102501 command_runner.go:130] > 28ea40be486c
	I1009 23:20:18.941310  102501 command_runner.go:130] > 4a9e9455ca75
	I1009 23:20:18.941316  102501 command_runner.go:130] > 866b3c026498
	I1009 23:20:18.941341  102501 command_runner.go:130] > 6807030f028b
	I1009 23:20:18.941348  102501 command_runner.go:130] > 1f3e1b00829d
	I1009 23:20:18.941352  102501 command_runner.go:130] > 8f01da7e8d17
	I1009 23:20:18.941355  102501 command_runner.go:130] > 41105b4ddb01
	I1009 23:20:18.941359  102501 command_runner.go:130] > 7ed3b793352f
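Note: the `docker ps -a --filter` / `docker stop` pair above is how minikube tears down the kube-system containers before reconfiguring: cri-dockerd names containers `k8s_<container>_<pod>_<namespace>_...`, so a name filter on `(kube-system)` selects exactly the control-plane and addon containers. A minimal Go sketch of the equivalent two calls, run locally with os/exec rather than through minikube's ssh_runner (that substitution is an assumption for illustration):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // List the IDs of containers whose names match the kube-system
        // naming scheme: k8s_<container>_<pod>_(kube-system)_...
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return // nothing to stop
        }
        // Stop every matched container in one docker invocation, as the log does.
        if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
            panic(err)
        }
        fmt.Printf("stopped %d kube-system containers\n", len(ids))
    }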
	I1009 23:20:18.942364  102501 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 23:20:18.957255  102501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 23:20:18.965998  102501 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1009 23:20:18.966020  102501 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1009 23:20:18.966027  102501 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1009 23:20:18.966040  102501 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 23:20:18.966076  102501 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 23:20:18.966121  102501 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 23:20:18.974215  102501 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
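Note: the failed `ls` above is the stale-config check; exit status 2 means at least one of the four kubeconfig files is missing, so there is nothing stale to clean up and minikube restores the saved kubeadm.yaml and reconfigures from scratch. A sketch of that decision, assuming a local os.Stat stands in for the remote `sudo ls`:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        missing := 0
        for _, c := range confs {
            if _, err := os.Stat(c); err != nil {
                missing++ // absent file: no stale config left to clean up
            }
        }
        if missing > 0 {
            fmt.Println("config check failed, skipping stale config cleanup")
        }
    }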
	I1009 23:20:18.974250  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 23:20:19.097836  102501 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 23:20:19.097866  102501 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1009 23:20:19.097877  102501 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1009 23:20:19.097887  102501 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 23:20:19.097896  102501 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1009 23:20:19.097907  102501 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1009 23:20:19.097921  102501 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1009 23:20:19.097933  102501 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1009 23:20:19.097951  102501 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1009 23:20:19.097964  102501 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 23:20:19.097981  102501 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 23:20:19.097989  102501 command_runner.go:130] > [certs] Using the existing "sa" key
	I1009 23:20:19.098013  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 23:20:19.149974  102501 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 23:20:19.370640  102501 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 23:20:19.439952  102501 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 23:20:19.587309  102501 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 23:20:19.936787  102501 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 23:20:19.939697  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 23:20:20.131820  102501 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 23:20:20.131849  102501 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 23:20:20.131856  102501 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1009 23:20:20.131884  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 23:20:20.227804  102501 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 23:20:20.228566  102501 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 23:20:20.242318  102501 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 23:20:20.245953  102501 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 23:20:20.251081  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 23:20:20.324130  102501 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
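Note: instead of a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the saved config, which is why the log shows existing certificates being reused rather than regenerated. A sketch of that phase loop, with the paths and PATH override copied from the log and the loop structure itself illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Phase list and paths are taken from the log above.
        phases := []string{
            "certs all", "kubeconfig all", "kubelet-start",
            "control-plane all", "etcd local",
        }
        for _, p := range phases {
            cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" `+
                `kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                panic(fmt.Sprintf("phase %q failed: %v\n%s", p, err, out))
            }
        }
    }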
	I1009 23:20:20.324282  102501 api_server.go:52] waiting for apiserver process to appear ...
	I1009 23:20:20.324357  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:20:20.341546  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:20:20.855041  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:20:21.354529  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:20:21.855051  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:20:21.919528  102501 command_runner.go:130] > 1551
	I1009 23:20:21.919913  102501 api_server.go:72] duration metric: took 1.595628276s to wait for apiserver process to appear ...
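Note: the repeated `pgrep -xnf kube-apiserver.*minikube.*` runs above are a poll loop: retry roughly every 500ms until pgrep exits 0 and prints a PID. A sketch of that wait, with the 2-minute timeout chosen here purely for illustration:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            // pgrep exits non-zero until a matching process exists.
            out, err := exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("timed out waiting for kube-apiserver process")
    }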
	I1009 23:20:21.919937  102501 api_server.go:88] waiting for apiserver healthz status ...
	I1009 23:20:21.919958  102501 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I1009 23:20:26.240612  102501 api_server.go:279] https://192.168.39.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 23:20:26.240642  102501 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 23:20:26.240656  102501 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I1009 23:20:26.281987  102501 api_server.go:279] https://192.168.39.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 23:20:26.282014  102501 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 23:20:26.782530  102501 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I1009 23:20:26.799701  102501 api_server.go:279] https://192.168.39.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1009 23:20:26.799737  102501 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1009 23:20:27.282386  102501 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I1009 23:20:27.291905  102501 api_server.go:279] https://192.168.39.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1009 23:20:27.291952  102501 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1009 23:20:27.782074  102501 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I1009 23:20:27.787200  102501 api_server.go:279] https://192.168.39.167:8443/healthz returned 200:
	ok
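Note: the healthz progression above is the normal restart sequence: 403 while requests are still anonymous (the RBAC bootstrap roles are not applied yet), then 500 while individual post-start hooks such as rbac/bootstrap-roles are still failing, then a plain 200 "ok". A sketch of that poll, assuming for simplicity that TLS verification is skipped (minikube itself trusts the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Sketch only: InsecureSkipVerify is a simplification.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.39.167:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                // 403: anonymous request before RBAC bootstrap; 500: a post-start
                // hook (e.g. rbac/bootstrap-roles) is still failing.
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("timed out waiting for /healthz")
    }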
	I1009 23:20:27.787284  102501 round_trippers.go:463] GET https://192.168.39.167:8443/version
	I1009 23:20:27.787294  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:27.787303  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:27.787309  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:27.795028  102501 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1009 23:20:27.795050  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:27.795061  102501 round_trippers.go:580]     Content-Length: 263
	I1009 23:20:27.795068  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:27 GMT
	I1009 23:20:27.795081  102501 round_trippers.go:580]     Audit-Id: 8612a343-110c-4656-9675-619c27f9fb3a
	I1009 23:20:27.795092  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:27.795100  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:27.795113  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:27.795121  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:27.795153  102501 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1009 23:20:27.795240  102501 api_server.go:141] control plane version: v1.28.2
	I1009 23:20:27.795257  102501 api_server.go:131] duration metric: took 5.875313407s to wait for apiserver health ...
	I1009 23:20:27.795267  102501 cni.go:84] Creating CNI manager for ""
	I1009 23:20:27.795275  102501 cni.go:136] 2 nodes found, recommending kindnet
	I1009 23:20:27.797117  102501 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1009 23:20:27.798586  102501 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 23:20:27.805179  102501 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1009 23:20:27.805206  102501 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1009 23:20:27.805215  102501 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1009 23:20:27.805222  102501 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 23:20:27.805227  102501 command_runner.go:130] > Access: 2023-10-09 23:19:55.241785686 +0000
	I1009 23:20:27.805232  102501 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I1009 23:20:27.805237  102501 command_runner.go:130] > Change: 2023-10-09 23:19:53.416785686 +0000
	I1009 23:20:27.805240  102501 command_runner.go:130] >  Birth: -
	I1009 23:20:27.805418  102501 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1009 23:20:27.805435  102501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1009 23:20:27.849027  102501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 23:20:29.016702  102501 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1009 23:20:29.016723  102501 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1009 23:20:29.016729  102501 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1009 23:20:29.016734  102501 command_runner.go:130] > daemonset.apps/kindnet configured
	I1009 23:20:29.016759  102501 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.167702436s)
	I1009 23:20:29.016782  102501 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 23:20:29.016915  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:29.016927  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.016934  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.016943  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.021909  102501 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 23:20:29.021948  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.021959  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:28 GMT
	I1009 23:20:29.021965  102501 round_trippers.go:580]     Audit-Id: 518acb25-734a-4664-9406-181a1a4fb98e
	I1009 23:20:29.021971  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.021980  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.021988  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.022001  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.023404  102501 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1198"},"items":[{"metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85005 chars]
	I1009 23:20:29.027593  102501 system_pods.go:59] 12 kube-system pods found
	I1009 23:20:29.027628  102501 system_pods.go:61] "coredns-5dd5756b68-m56ds" [2898e186-93b2-49f3-9e87-2f6c4f5619ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 23:20:29.027636  102501 system_pods.go:61] "etcd-multinode-921619" [5642d3e0-eecc-4fce-a750-9c68f66042e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 23:20:29.027649  102501 system_pods.go:61] "kindnet-ddwsx" [2475cf58-f505-4b9f-b133-dcd2cdb74489] Running
	I1009 23:20:29.027655  102501 system_pods.go:61] "kindnet-mvhgv" [c66b80a9-b1d2-43b8-b1f2-a9be10b998a6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1009 23:20:29.027659  102501 system_pods.go:61] "kindnet-w7ch7" [21dbde88-f1f9-40d2-9893-8ee4b88088bd] Running
	I1009 23:20:29.027671  102501 system_pods.go:61] "kube-apiserver-multinode-921619" [bb483c09-0ecb-447b-a339-2494340bda70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 23:20:29.027678  102501 system_pods.go:61] "kube-controller-manager-multinode-921619" [e39c9043-b776-4ae0-b79a-528bf4fe7198] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 23:20:29.027686  102501 system_pods.go:61] "kube-proxy-6nfdb" [5cbea5fb-98dd-4276-9b89-588271309935] Running
	I1009 23:20:29.027690  102501 system_pods.go:61] "kube-proxy-qlflz" [18003542-04f4-4330-8054-2e82da1f94f0] Running
	I1009 23:20:29.027695  102501 system_pods.go:61] "kube-proxy-t28g5" [e6e517cb-b1f0-4baa-9bb8-7eb0a8f4c339] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 23:20:29.027709  102501 system_pods.go:61] "kube-scheduler-multinode-921619" [9dc6b59f-e995-4b55-a755-8190f5c2d586] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 23:20:29.027719  102501 system_pods.go:61] "storage-provisioner" [cdc4f60e-144f-44b8-ac4f-741589b7146f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 23:20:29.027726  102501 system_pods.go:74] duration metric: took 10.933921ms to wait for pod list to return data ...
	I1009 23:20:29.027735  102501 node_conditions.go:102] verifying NodePressure condition ...
	I1009 23:20:29.027789  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes
	I1009 23:20:29.027796  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.027803  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.027809  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.030194  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:29.030215  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.030233  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.030242  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:28 GMT
	I1009 23:20:29.030250  102501 round_trippers.go:580]     Audit-Id: da209275-c78c-4cc2-9c60-1d8e90dd2d95
	I1009 23:20:29.030258  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.030266  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.030284  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.030549  102501 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1198"},"items":[{"metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 9584 chars]
	I1009 23:20:29.031202  102501 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1009 23:20:29.031226  102501 node_conditions.go:123] node cpu capacity is 2
	I1009 23:20:29.031237  102501 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1009 23:20:29.031241  102501 node_conditions.go:123] node cpu capacity is 2
	I1009 23:20:29.031244  102501 node_conditions.go:105] duration metric: took 3.502641ms to run NodePressure ...
	I1009 23:20:29.031264  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 23:20:29.309864  102501 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1009 23:20:29.309885  102501 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1009 23:20:29.310008  102501 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1009 23:20:29.310123  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1009 23:20:29.310134  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.310142  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.310148  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.315687  102501 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 23:20:29.315703  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.315710  102501 round_trippers.go:580]     Audit-Id: f6faf9c8-10a1-4bf8-b7a4-df5bc43a94d6
	I1009 23:20:29.315746  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.315761  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.315769  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.315776  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.315785  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.316152  102501 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1200"},"items":[{"metadata":{"name":"etcd-multinode-921619","namespace":"kube-system","uid":"5642d3e0-eecc-4fce-a750-9c68f66042e8","resourceVersion":"1133","creationTimestamp":"2023-10-09T23:13:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.167:2379","kubernetes.io/config.hash":"51389476e64a88c1fb4ad2d7318e8384","kubernetes.io/config.mirror":"51389476e64a88c1fb4ad2d7318e8384","kubernetes.io/config.seen":"2023-10-09T23:13:10.214448400Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 29793 chars]
	I1009 23:20:29.317202  102501 kubeadm.go:787] kubelet initialised
	I1009 23:20:29.317222  102501 kubeadm.go:788] duration metric: took 7.190657ms waiting for restarted kubelet to initialise ...
	I1009 23:20:29.317232  102501 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 23:20:29.317307  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:29.317322  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.317333  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.317347  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.323980  102501 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1009 23:20:29.323994  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.324001  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.324006  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.324011  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.324016  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.324021  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.324025  102501 round_trippers.go:580]     Audit-Id: 8eebb8f5-311c-4ad2-9ea9-d8d0bd3c654e
	I1009 23:20:29.326041  102501 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1200"},"items":[{"metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85005 chars]
	I1009 23:20:29.328597  102501 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:29.328688  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:29.328701  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.328712  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.328727  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.330843  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:29.330862  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.330871  102501 round_trippers.go:580]     Audit-Id: 3bb7794f-1936-43e6-b1ba-216c53355977
	I1009 23:20:29.330879  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.330887  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.330896  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.330907  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.330912  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.331079  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:29.331467  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:29.331478  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.331485  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.331491  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.338607  102501 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1009 23:20:29.338626  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.338635  102501 round_trippers.go:580]     Audit-Id: a7a6a9af-e924-462c-8ae8-526954ff5f5b
	I1009 23:20:29.338646  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.338654  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.338662  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.338671  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.338680  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.338820  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:29.339109  102501 pod_ready.go:97] node "multinode-921619" hosting pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:29.339132  102501 pod_ready.go:81] duration metric: took 10.516338ms waiting for pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:29.339144  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619" hosting pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
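Note: the three-step pattern above repeats for each control-plane pod below: fetch the Pod, fetch the Node it is scheduled on, and if the Node's Ready condition is False, record the condition and move on rather than spending the 4m budget on a pod that cannot become Ready yet. A client-go sketch of that check (the kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-m56ds", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if !nodeReady(node) {
            // Mirror the log: record the skip instead of waiting on this pod.
            fmt.Printf("node %q hosting pod %q is not Ready, skipping\n", node.Name, pod.Name)
        }
    }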
	I1009 23:20:29.339163  102501 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:29.339218  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-921619
	I1009 23:20:29.339227  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.339237  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.339247  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.341167  102501 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 23:20:29.341180  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.341186  102501 round_trippers.go:580]     Audit-Id: b707b15b-a4ea-4019-b9f8-499fdd5cfcbd
	I1009 23:20:29.341191  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.341199  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.341204  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.341210  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.341216  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.341596  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-921619","namespace":"kube-system","uid":"5642d3e0-eecc-4fce-a750-9c68f66042e8","resourceVersion":"1133","creationTimestamp":"2023-10-09T23:13:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.167:2379","kubernetes.io/config.hash":"51389476e64a88c1fb4ad2d7318e8384","kubernetes.io/config.mirror":"51389476e64a88c1fb4ad2d7318e8384","kubernetes.io/config.seen":"2023-10-09T23:13:10.214448400Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6306 chars]
	I1009 23:20:29.341943  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:29.341953  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.341960  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.341966  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.344226  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:29.344243  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.344252  102501 round_trippers.go:580]     Audit-Id: 7bfcd7f6-e563-45f4-bfeb-328a88a90153
	I1009 23:20:29.344261  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.344268  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.344276  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.344285  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.344295  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.344570  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:29.344854  102501 pod_ready.go:97] node "multinode-921619" hosting pod "etcd-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:29.344871  102501 pod_ready.go:81] duration metric: took 5.701111ms waiting for pod "etcd-multinode-921619" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:29.344878  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619" hosting pod "etcd-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:29.344890  102501 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:29.344935  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-921619
	I1009 23:20:29.344939  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.344945  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.344954  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.349124  102501 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 23:20:29.349140  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.349148  102501 round_trippers.go:580]     Audit-Id: 914866a0-d61d-46f0-bc72-be03cc62eda6
	I1009 23:20:29.349156  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.349163  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.349169  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.349177  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.349184  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.349759  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-921619","namespace":"kube-system","uid":"bb483c09-0ecb-447b-a339-2494340bda70","resourceVersion":"1135","creationTimestamp":"2023-10-09T23:13:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.167:8443","kubernetes.io/config.hash":"3992fff0ca56642e7b8e9139e8dd6a1b","kubernetes.io/config.mirror":"3992fff0ca56642e7b8e9139e8dd6a1b","kubernetes.io/config.seen":"2023-10-09T23:13:02.202089577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7860 chars]
	I1009 23:20:29.350119  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:29.350136  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.350146  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.350155  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.353685  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:29.353701  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.353709  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.353717  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.353724  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.353735  102501 round_trippers.go:580]     Audit-Id: 92a4993b-7981-4e68-875f-a5a017aa0a98
	I1009 23:20:29.353743  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.353755  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.353923  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:29.354196  102501 pod_ready.go:97] node "multinode-921619" hosting pod "kube-apiserver-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:29.354215  102501 pod_ready.go:81] duration metric: took 9.320646ms waiting for pod "kube-apiserver-multinode-921619" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:29.354226  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619" hosting pod "kube-apiserver-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:29.354237  102501 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:29.354275  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-921619
	I1009 23:20:29.354282  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.354288  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.354294  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.357803  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:29.357831  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.357840  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.357849  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.357855  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.357863  102501 round_trippers.go:580]     Audit-Id: 250daee4-5637-49ae-bde7-fa6792df4c3e
	I1009 23:20:29.357871  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.357880  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.358306  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-921619","namespace":"kube-system","uid":"e39c9043-b776-4ae0-b79a-528bf4fe7198","resourceVersion":"1137","creationTimestamp":"2023-10-09T23:13:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5029a9f6494c3e91f8e10e5de930fb7a","kubernetes.io/config.mirror":"5029a9f6494c3e91f8e10e5de930fb7a","kubernetes.io/config.seen":"2023-10-09T23:13:10.214452022Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7441 chars]
	I1009 23:20:29.417920  102501 request.go:629] Waited for 59.245609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:29.418011  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:29.418019  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.418035  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.418054  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.420290  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:29.420307  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.420314  102501 round_trippers.go:580]     Audit-Id: d8fa4025-6f68-4310-aa2e-b37f8f4a4a3a
	I1009 23:20:29.420320  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.420325  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.420330  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.420336  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.420341  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.420585  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:29.420920  102501 pod_ready.go:97] node "multinode-921619" hosting pod "kube-controller-manager-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:29.420941  102501 pod_ready.go:81] duration metric: took 66.69663ms waiting for pod "kube-controller-manager-multinode-921619" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:29.420954  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619" hosting pod "kube-controller-manager-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
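The repeated "Waited for … due to client-side throttling, not priority and fairness" lines above come from client-go's client-side rate limiter, not from the server's API Priority and Fairness machinery (which only appears here as the X-Kubernetes-Pf-* response headers). The rest.Config dump later in this log shows QPS:0, Burst:0, which falls back to the defaults of 5 requests/second with a burst of 10. A minimal sketch, not minikube's code, of where those knobs live:

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newClientset is a hypothetical helper: raising QPS/Burst above the
	// defaults (5/10) would eliminate the ~200ms waits logged above.
	func newClientset(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // zero means "use the default of 5"
		cfg.Burst = 100 // zero means "use the default of 10"
		return kubernetes.NewForConfig(cfg)
	}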
	I1009 23:20:29.420967  102501 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6nfdb" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:29.617395  102501 request.go:629] Waited for 196.359758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nfdb
	I1009 23:20:29.617477  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nfdb
	I1009 23:20:29.617482  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.617519  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.617532  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.620332  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:29.620354  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.620363  102501 round_trippers.go:580]     Audit-Id: d1d36c12-4d23-42bf-bf28-71af7c14b1c7
	I1009 23:20:29.620371  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.620378  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.620386  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.620392  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.620399  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.620669  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6nfdb","generateName":"kube-proxy-","namespace":"kube-system","uid":"5cbea5fb-98dd-4276-9b89-588271309935","resourceVersion":"1087","creationTimestamp":"2023-10-09T23:15:07Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"397c0b68-e3eb-4745-879b-9ebb950e99c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"397c0b68-e3eb-4745-879b-9ebb950e99c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I1009 23:20:29.817514  102501 request.go:629] Waited for 196.366236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619-m03
	I1009 23:20:29.817575  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619-m03
	I1009 23:20:29.817580  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.817590  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.817597  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.820515  102501 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1009 23:20:29.820534  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.820541  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.820546  102501 round_trippers.go:580]     Content-Length: 210
	I1009 23:20:29.820551  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.820556  102501 round_trippers.go:580]     Audit-Id: 8227eb14-3466-4250-98ba-021ab11627ce
	I1009 23:20:29.820562  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.820569  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.820574  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.820674  102501 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-921619-m03\" not found","reason":"NotFound","details":{"name":"multinode-921619-m03","kind":"nodes"},"code":404}
	I1009 23:20:29.820884  102501 pod_ready.go:97] node "multinode-921619-m03" hosting pod "kube-proxy-6nfdb" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-921619-m03": nodes "multinode-921619-m03" not found
	I1009 23:20:29.820904  102501 pod_ready.go:81] duration metric: took 399.925167ms waiting for pod "kube-proxy-6nfdb" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:29.820913  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619-m03" hosting pod "kube-proxy-6nfdb" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-921619-m03": nodes "multinode-921619-m03" not found
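The 404 above is the API server's normal answer for a node that no longer exists ("multinode-921619-m03" is gone after the restart), and the waiter treats it as "skip this pod" rather than a hard failure. A sketch of that distinction with client-go's error helpers (nodeGone is a hypothetical name, not the test's own function):

	package main

	import (
		"context"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeGone distinguishes "node deleted" (NotFound) from real errors.
	func nodeGone(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
		_, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	}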
	I1009 23:20:29.820920  102501 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qlflz" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:30.017460  102501 request.go:629] Waited for 196.386929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlflz
	I1009 23:20:30.017536  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlflz
	I1009 23:20:30.017544  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:30.017553  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:30.017581  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:30.020180  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:30.020204  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:30.020213  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:30.020222  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:30.020229  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:30.020237  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:30.020244  102501 round_trippers.go:580]     Audit-Id: 98a4d584-c326-4dc7-9193-e761ac4fd0e3
	I1009 23:20:30.020253  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:30.020442  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qlflz","generateName":"kube-proxy-","namespace":"kube-system","uid":"18003542-04f4-4330-8054-2e82da1f94f0","resourceVersion":"973","creationTimestamp":"2023-10-09T23:14:14Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"397c0b68-e3eb-4745-879b-9ebb950e99c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:14:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"397c0b68-e3eb-4745-879b-9ebb950e99c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
	I1009 23:20:30.217421  102501 request.go:629] Waited for 196.380894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619-m02
	I1009 23:20:30.217544  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619-m02
	I1009 23:20:30.217553  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:30.217562  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:30.217568  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:30.220192  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:30.220217  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:30.220232  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:30.220240  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:30.220248  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:30 GMT
	I1009 23:20:30.220256  102501 round_trippers.go:580]     Audit-Id: a89f2229-dc84-4627-b943-7332ce83a64c
	I1009 23:20:30.220263  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:30.220271  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:30.220430  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619-m02","uid":"fccae5d8-c831-4dfb-91f9-523a6eb81706","resourceVersion":"992","creationTimestamp":"2023-10-09T23:18:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:18:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3253 chars]
	I1009 23:20:30.220770  102501 pod_ready.go:92] pod "kube-proxy-qlflz" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:30.220789  102501 pod_ready.go:81] duration metric: took 399.862512ms waiting for pod "kube-proxy-qlflz" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:30.220799  102501 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t28g5" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:30.417137  102501 request.go:629] Waited for 196.269389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t28g5
	I1009 23:20:30.417206  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t28g5
	I1009 23:20:30.417211  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:30.417227  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:30.417233  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:30.419855  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:30.419881  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:30.419892  102501 round_trippers.go:580]     Audit-Id: e1af97bb-0e90-44f6-8d14-ee4fff9bd10f
	I1009 23:20:30.419904  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:30.419912  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:30.419920  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:30.419928  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:30.419937  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:30 GMT
	I1009 23:20:30.420094  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t28g5","generateName":"kube-proxy-","namespace":"kube-system","uid":"e6e517cb-b1f0-4baa-9bb8-7eb0a8f4c339","resourceVersion":"1150","creationTimestamp":"2023-10-09T23:13:22Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"397c0b68-e3eb-4745-879b-9ebb950e99c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"397c0b68-e3eb-4745-879b-9ebb950e99c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5933 chars]
	I1009 23:20:30.616959  102501 request.go:629] Waited for 196.346937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:30.617038  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:30.617046  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:30.617057  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:30.617066  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:30.619760  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:30.619783  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:30.619790  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:30 GMT
	I1009 23:20:30.619795  102501 round_trippers.go:580]     Audit-Id: cb4c9dca-c672-4ccc-a686-5159e6fd16e9
	I1009 23:20:30.619802  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:30.619810  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:30.619819  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:30.619827  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:30.619954  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:30.620396  102501 pod_ready.go:97] node "multinode-921619" hosting pod "kube-proxy-t28g5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:30.620418  102501 pod_ready.go:81] duration metric: took 399.611293ms waiting for pod "kube-proxy-t28g5" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:30.620432  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619" hosting pod "kube-proxy-t28g5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
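The 'has status "Ready":"False"' messages key off the node's Ready condition in .status.conditions; until the kubelet posts Ready=True, every pod hosted on that node is skipped. A minimal illustration of the check (nodeIsReady is an illustrative helper, not minikube's own):

	package main

	import corev1 "k8s.io/api/core/v1"

	// nodeIsReady reports whether the node's Ready condition is True,
	// the same condition the waiter above keys off.
	func nodeIsReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}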
	I1009 23:20:30.620448  102501 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:30.817964  102501 request.go:629] Waited for 197.418808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-921619
	I1009 23:20:30.818066  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-921619
	I1009 23:20:30.818078  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:30.818090  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:30.818102  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:30.820622  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:30.820646  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:30.820653  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:30.820659  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:30 GMT
	I1009 23:20:30.820664  102501 round_trippers.go:580]     Audit-Id: fa0f7a63-74e5-4e41-8c2a-65f36ffa341f
	I1009 23:20:30.820670  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:30.820679  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:30.820693  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:30.820982  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-921619","namespace":"kube-system","uid":"9dc6b59f-e995-4b55-a755-8190f5c2d586","resourceVersion":"1140","creationTimestamp":"2023-10-09T23:13:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"791efd99637773aca959cb55de9c4adc","kubernetes.io/config.mirror":"791efd99637773aca959cb55de9c4adc","kubernetes.io/config.seen":"2023-10-09T23:13:10.214452753Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5153 chars]
	I1009 23:20:31.017899  102501 request.go:629] Waited for 196.384549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:31.017963  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:31.017973  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:31.017988  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:31.018026  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:31.020942  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:31.020966  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:31.020976  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:31.020988  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:31.020997  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:30 GMT
	I1009 23:20:31.021005  102501 round_trippers.go:580]     Audit-Id: 97e01f4c-0d60-4628-8a3e-51d05eaa36c4
	I1009 23:20:31.021013  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:31.021025  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:31.021146  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:31.021600  102501 pod_ready.go:97] node "multinode-921619" hosting pod "kube-scheduler-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:31.021649  102501 pod_ready.go:81] duration metric: took 401.189696ms waiting for pod "kube-scheduler-multinode-921619" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:31.021666  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619" hosting pod "kube-scheduler-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:31.021677  102501 pod_ready.go:38] duration metric: took 1.704428487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
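Each "waiting up to 4m0s for pod … to be \"Ready\"" block above is a bounded poll of the pod's Ready condition. A sketch of the same pattern using apimachinery's wait helpers (assuming a recent client-go; names are illustrative, not the test's own):

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls until the pod's Ready condition is True or the
	// timeout expires; transient GET errors simply mean "poll again".
	func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}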
	I1009 23:20:31.021702  102501 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 23:20:31.033315  102501 command_runner.go:130] > -16
	I1009 23:20:31.033350  102501 ops.go:34] apiserver oom_adj: -16
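The -16 read back from /proc/<pid>/oom_adj confirms the API server carries a negative OOM adjustment, so the kernel's OOM killer will sacrifice other processes first. A sketch of the same probe in Go (apiserverOOMAdj is hypothetical; the test shells out exactly as shown above):

	package main

	import (
		"os/exec"
		"strconv"
		"strings"
	)

	// apiserverOOMAdj reads the API server's oom_adj; negative values
	// shield the process from the OOM killer.
	func apiserverOOMAdj() (int, error) {
		out, err := exec.Command("/bin/bash", "-c",
			"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
		if err != nil {
			return 0, err
		}
		return strconv.Atoi(strings.TrimSpace(string(out)))
	}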
	I1009 23:20:31.033359  102501 kubeadm.go:640] restartCluster took 22.158555077s
	I1009 23:20:31.033368  102501 kubeadm.go:406] StartCluster complete in 22.18745007s
	I1009 23:20:31.033390  102501 settings.go:142] acquiring lock: {Name:mkfad4f7073b09104d7b3dee9986ba7dad256c4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:20:31.033474  102501 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17375-78415/kubeconfig
	I1009 23:20:31.034150  102501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-78415/kubeconfig: {Name:mkee061910efe3fb616ee347e2e0b1635aa74f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:20:31.034392  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 23:20:31.034426  102501 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1009 23:20:31.037258  102501 out.go:177] * Enabled addons: 
	I1009 23:20:31.034672  102501 config.go:182] Loaded profile config "multinode-921619": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:20:31.034742  102501 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17375-78415/kubeconfig
	I1009 23:20:31.038654  102501 addons.go:502] enable addons completed in 4.249113ms: enabled=[]
	I1009 23:20:31.039000  102501 kapi.go:59] client config for multinode-921619: &rest.Config{Host:"https://192.168.39.167:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/client.crt", KeyFile:"/home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/client.key", CAFile:"/home/jenkins/minikube-integration/17375-78415/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c11c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
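The client config dumped above is a client-go rest.Config: the profile's client certificate and key plus the cluster CA, with everything else left at zero values. A pared-down sketch of building an equivalent config by hand (in practice clientcmd derives it from the kubeconfig; the paths are the ones shown in the dump):

	package main

	import "k8s.io/client-go/rest"

	// newRestConfig mirrors the fields visible in the dump above:
	// API server endpoint plus mutual-TLS credentials.
	func newRestConfig() *rest.Config {
		return &rest.Config{
			Host: "https://192.168.39.167:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/client.key",
				CAFile:   "/home/jenkins/minikube-integration/17375-78415/.minikube/ca.crt",
			},
		}
	}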
	I1009 23:20:31.039473  102501 round_trippers.go:463] GET https://192.168.39.167:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1009 23:20:31.039492  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:31.039504  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:31.039518  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:31.042220  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:31.042241  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:31.042251  102501 round_trippers.go:580]     Content-Length: 292
	I1009 23:20:31.042259  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:30 GMT
	I1009 23:20:31.042267  102501 round_trippers.go:580]     Audit-Id: 95a565c6-0506-4cce-a2bb-068426327003
	I1009 23:20:31.042277  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:31.042289  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:31.042299  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:31.042311  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:31.042344  102501 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"61b33a8d-11f2-4ba8-a069-c1ca4e52a49d","resourceVersion":"1199","creationTimestamp":"2023-10-09T23:13:10Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1009 23:20:31.042517  102501 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-921619" context rescaled to 1 replicas
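The "rescaled to 1 replicas" step works through the deployment's scale subresource, which is the GET on …/deployments/coredns/scale above; since spec.replicas was already 1, no write was needed in this run. A sketch of the idempotent version with client-go (scaleCoreDNS is an illustrative name):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// scaleCoreDNS pins coredns to the desired replica count, writing
	// only when the current value differs.
	func scaleCoreDNS(ctx context.Context, c kubernetes.Interface, replicas int32) error {
		scale, err := c.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if scale.Spec.Replicas == replicas {
			return nil
		}
		scale.Spec.Replicas = replicas
		_, err = c.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}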
	I1009 23:20:31.042557  102501 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 23:20:31.045125  102501 out.go:177] * Verifying Kubernetes components...
	I1009 23:20:31.046508  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:20:31.180668  102501 command_runner.go:130] > apiVersion: v1
	I1009 23:20:31.180694  102501 command_runner.go:130] > data:
	I1009 23:20:31.180703  102501 command_runner.go:130] >   Corefile: |
	I1009 23:20:31.180709  102501 command_runner.go:130] >     .:53 {
	I1009 23:20:31.180715  102501 command_runner.go:130] >         log
	I1009 23:20:31.180721  102501 command_runner.go:130] >         errors
	I1009 23:20:31.180726  102501 command_runner.go:130] >         health {
	I1009 23:20:31.180736  102501 command_runner.go:130] >            lameduck 5s
	I1009 23:20:31.180741  102501 command_runner.go:130] >         }
	I1009 23:20:31.180752  102501 command_runner.go:130] >         ready
	I1009 23:20:31.180761  102501 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1009 23:20:31.180768  102501 command_runner.go:130] >            pods insecure
	I1009 23:20:31.180791  102501 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1009 23:20:31.180802  102501 command_runner.go:130] >            ttl 30
	I1009 23:20:31.180808  102501 command_runner.go:130] >         }
	I1009 23:20:31.180816  102501 command_runner.go:130] >         prometheus :9153
	I1009 23:20:31.180823  102501 command_runner.go:130] >         hosts {
	I1009 23:20:31.180832  102501 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1009 23:20:31.180847  102501 command_runner.go:130] >            fallthrough
	I1009 23:20:31.180854  102501 command_runner.go:130] >         }
	I1009 23:20:31.180863  102501 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1009 23:20:31.180876  102501 command_runner.go:130] >            max_concurrent 1000
	I1009 23:20:31.180886  102501 command_runner.go:130] >         }
	I1009 23:20:31.180894  102501 command_runner.go:130] >         cache 30
	I1009 23:20:31.180907  102501 command_runner.go:130] >         loop
	I1009 23:20:31.180918  102501 command_runner.go:130] >         reload
	I1009 23:20:31.180925  102501 command_runner.go:130] >         loadbalance
	I1009 23:20:31.180932  102501 command_runner.go:130] >     }
	I1009 23:20:31.180940  102501 command_runner.go:130] > kind: ConfigMap
	I1009 23:20:31.180947  102501 command_runner.go:130] > metadata:
	I1009 23:20:31.180955  102501 command_runner.go:130] >   creationTimestamp: "2023-10-09T23:13:10Z"
	I1009 23:20:31.180964  102501 command_runner.go:130] >   name: coredns
	I1009 23:20:31.180972  102501 command_runner.go:130] >   namespace: kube-system
	I1009 23:20:31.180980  102501 command_runner.go:130] >   resourceVersion: "392"
	I1009 23:20:31.180989  102501 command_runner.go:130] >   uid: 3631ac3c-f1e2-4b20-ba21-bc50514ba3c3
	I1009 23:20:31.181101  102501 node_ready.go:35] waiting up to 6m0s for node "multinode-921619" to be "Ready" ...
	I1009 23:20:31.181135  102501 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
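The Corefile dumped above already contains a hosts block mapping 192.168.39.1 to host.minikube.internal, which is why this step logs "already contains … skipping". minikube fetched it with kubectl over SSH; an in-process equivalent of the same idempotency check might look like this (hostRecordPresent is hypothetical):

	package main

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// hostRecordPresent reports whether the coredns Corefile already
	// resolves host.minikube.internal, so no patch is needed.
	func hostRecordPresent(ctx context.Context, c kubernetes.Interface) (bool, error) {
		cm, err := c.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
	}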
	I1009 23:20:31.217500  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:31.217524  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:31.217537  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:31.217543  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:31.220138  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:31.220156  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:31.220163  102501 round_trippers.go:580]     Audit-Id: f27f37e9-7444-4b59-9f23-f3d455a0ea11
	I1009 23:20:31.220168  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:31.220173  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:31.220178  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:31.220186  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:31.220194  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:31 GMT
	I1009 23:20:31.220412  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:31.417158  102501 request.go:629] Waited for 196.304477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:31.417235  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:31.417246  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:31.417261  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:31.417270  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:31.420791  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:31.420812  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:31.420824  102501 round_trippers.go:580]     Audit-Id: 15ab2a29-df6e-41c6-b262-45effc55088f
	I1009 23:20:31.420830  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:31.420836  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:31.420842  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:31.420851  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:31.420859  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:31 GMT
	I1009 23:20:31.421729  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:31.922831  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:31.922850  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:31.922862  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:31.922884  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:31.928746  102501 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 23:20:31.928769  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:31.928776  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:31.928782  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:31.928787  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:31.928792  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:31.928799  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:31 GMT
	I1009 23:20:31.928811  102501 round_trippers.go:580]     Audit-Id: 2f128c8b-da62-4cc1-88d7-2e80bc044c62
	I1009 23:20:31.929877  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:32.422580  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:32.422603  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:32.422612  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:32.422618  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:32.425437  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:32.425460  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:32.425467  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:32.425472  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:32 GMT
	I1009 23:20:32.425482  102501 round_trippers.go:580]     Audit-Id: f51c1120-577c-42ba-8224-490b9dfbb5e6
	I1009 23:20:32.425488  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:32.425493  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:32.425498  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:32.425658  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:32.923122  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:32.923145  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:32.923154  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:32.923160  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:32.926112  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:32.926132  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:32.926143  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:32 GMT
	I1009 23:20:32.926151  102501 round_trippers.go:580]     Audit-Id: f63787e8-5fe1-4121-9889-46b7b827e392
	I1009 23:20:32.926158  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:32.926166  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:32.926172  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:32.926180  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:32.926410  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:33.423129  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:33.423153  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.423161  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.423167  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.425944  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:33.425966  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.425975  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.425982  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.425990  102501 round_trippers.go:580]     Audit-Id: 58b0dce3-207f-4190-b22d-47e7b23c5a53
	I1009 23:20:33.425998  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.426007  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.426014  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.426212  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:33.426545  102501 node_ready.go:49] node "multinode-921619" has status "Ready":"True"
	I1009 23:20:33.426561  102501 node_ready.go:38] duration metric: took 2.245426892s waiting for node "multinode-921619" to be "Ready" ...
	I1009 23:20:33.426570  102501 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
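The extra wait enumerates the system-critical pods by listing kube-system once (the PodList below) and then re-checking each pod against the component/k8s-app labels named above. An equivalent selector-based query, shown only as a sketch (the log itself uses an unfiltered list plus per-pod GETs):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// listControlPlanePods fetches control-plane pods by the same
	// component labels the waiter filters on.
	func listControlPlanePods(ctx context.Context, c kubernetes.Interface) (int, error) {
		pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
			LabelSelector: "component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)",
		})
		if err != nil {
			return 0, err
		}
		return len(pods.Items), nil
	}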
	I1009 23:20:33.426619  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:33.426626  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.426640  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.426646  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.432232  102501 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 23:20:33.432248  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.432257  102501 round_trippers.go:580]     Audit-Id: 2cd4d682-2a6d-4297-8c1e-80c6bd1a3ae3
	I1009 23:20:33.432266  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.432274  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.432282  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.432290  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.432301  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.433210  102501 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1213"},"items":[{"metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84415 chars]
	I1009 23:20:33.435800  102501 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:33.435880  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:33.435889  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.435897  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.435902  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.441427  102501 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 23:20:33.441452  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.441461  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.441467  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.441472  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.441477  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.441486  102501 round_trippers.go:580]     Audit-Id: 0d66d537-177f-4e10-837e-2948c506db3d
	I1009 23:20:33.441491  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.441616  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:33.442045  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:33.442058  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.442065  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.442070  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.444241  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:33.444257  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.444266  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.444275  102501 round_trippers.go:580]     Audit-Id: b0ec77d6-df06-4a82-a55c-7d7e907f46c5
	I1009 23:20:33.444283  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.444292  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.444301  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.444311  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.444460  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:33.444775  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:33.444788  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.444798  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.444806  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.446862  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:33.446882  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.446892  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.446900  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.446910  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.446919  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.446924  102501 round_trippers.go:580]     Audit-Id: cbfc2cb5-cecf-432e-8a81-e0b3843571e6
	I1009 23:20:33.446930  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.447085  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:33.447492  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:33.447507  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.447517  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.447531  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.449190  102501 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 23:20:33.449203  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.449209  102501 round_trippers.go:580]     Audit-Id: 7f85b8af-ccac-4847-a48e-1983ca2a27a9
	I1009 23:20:33.449214  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.449219  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.449224  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.449245  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.449251  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.449374  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:33.950490  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:33.950512  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.950521  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.950527  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.953588  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:33.953609  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.953621  102501 round_trippers.go:580]     Audit-Id: d3668535-5fdc-4c30-9f13-866cd609737d
	I1009 23:20:33.953629  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.953641  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.953650  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.953661  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.953671  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.953951  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:33.954405  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:33.954416  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.954424  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.954429  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.956577  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:33.956592  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.956601  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.956608  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.956617  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.956627  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.956637  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.956653  102501 round_trippers.go:580]     Audit-Id: d894d933-6de0-40c4-8e47-8860e6558204
	I1009 23:20:33.956879  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:34.450606  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:34.450634  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:34.450648  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:34.450656  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:34.453643  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:34.453661  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:34.453668  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:34.453674  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:34 GMT
	I1009 23:20:34.453679  102501 round_trippers.go:580]     Audit-Id: c2486ecd-2503-4c31-818a-826c3eca4681
	I1009 23:20:34.453684  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:34.453689  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:34.453694  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:34.453878  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:34.454428  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:34.454447  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:34.454471  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:34.454481  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:34.456524  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:34.456538  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:34.456544  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:34.456550  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:34.456556  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:34.456562  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:34 GMT
	I1009 23:20:34.456570  102501 round_trippers.go:580]     Audit-Id: f01a4fed-03f2-482c-a932-c76f5f3a978e
	I1009 23:20:34.456575  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:34.456840  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:34.950585  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:34.950609  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:34.950617  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:34.950623  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:34.954173  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:34.954193  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:34.954216  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:34.954222  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:34.954229  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:34 GMT
	I1009 23:20:34.954238  102501 round_trippers.go:580]     Audit-Id: 07c18ec1-717a-4b19-9d72-21b74a6b64ad
	I1009 23:20:34.954246  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:34.954255  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:34.954449  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:34.954915  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:34.954928  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:34.954935  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:34.954941  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:34.957289  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:34.957308  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:34.957316  102501 round_trippers.go:580]     Audit-Id: 99ce2485-a0e0-43ce-8c06-54f21abe6301
	I1009 23:20:34.957325  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:34.957330  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:34.957338  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:34.957344  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:34.957349  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:34 GMT
	I1009 23:20:34.957886  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:35.450656  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:35.450687  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:35.450700  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:35.450709  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:35.453447  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:35.453465  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:35.453472  102501 round_trippers.go:580]     Audit-Id: 1b799ce7-2c5d-424f-8268-673ef85820b6
	I1009 23:20:35.453478  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:35.453483  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:35.453488  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:35.453493  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:35.453498  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:35 GMT
	I1009 23:20:35.453691  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:35.454289  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:35.454309  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:35.454320  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:35.454329  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:35.456449  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:35.456468  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:35.456477  102501 round_trippers.go:580]     Audit-Id: 41356d37-7f1a-42e5-9190-2894a1af2276
	I1009 23:20:35.456487  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:35.456494  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:35.456502  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:35.456511  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:35.456518  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:35 GMT
	I1009 23:20:35.456924  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:35.457211  102501 pod_ready.go:102] pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace has status "Ready":"False"
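
	[editor's sketch] The pod_ready.go loop above issues a GET for the pod and its node roughly every 500 ms, checking whether the pod's Ready condition has turned True. The following is a minimal, illustrative sketch of such a readiness poll using client-go; the function names and the fake-clientset demo are assumptions for illustration, not minikube's actual pod_ready.go implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/kubernetes/fake"
	)

	// isPodReady reports whether the pod's PodReady condition is True,
	// i.e. the state this log is waiting for ("Ready":"True").
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitPodReady polls the API server until the named pod reports Ready
	// or the timeout elapses, mirroring the ~500ms GET loop in the log.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if isPodReady(pod) {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
	}

	func main() {
		// Demo against a fake clientset (hypothetical pod name) so the
		// sketch runs without a cluster; a Ready pod returns immediately.
		cs := fake.NewSimpleClientset(&corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: "coredns-demo", Namespace: "kube-system"},
			Status: corev1.PodStatus{
				Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
			},
		})
		fmt.Println(waitPodReady(context.Background(), cs, "kube-system", "coredns-demo", 2*time.Second))
	}

	Against a real cluster this loop produces exactly the repeated pod/node GETs recorded above; here the pod's Ready condition never flips to True, so the poll runs until the surrounding wait times out.
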
	I1009 23:20:35.950676  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:35.950715  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:35.950725  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:35.950733  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:35.953503  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:35.953521  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:35.953531  102501 round_trippers.go:580]     Audit-Id: 8b1fbc96-38f3-4d0e-90a9-7631660dab75
	I1009 23:20:35.953536  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:35.953541  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:35.953546  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:35.953551  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:35.953555  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:35 GMT
	I1009 23:20:35.953765  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:35.954413  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:35.954429  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:35.954439  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:35.954448  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:35.956765  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:35.956788  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:35.956796  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:35 GMT
	I1009 23:20:35.956802  102501 round_trippers.go:580]     Audit-Id: c8c65a02-3ace-4e9d-bdb5-554a2b21a08d
	I1009 23:20:35.956807  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:35.956821  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:35.956828  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:35.956837  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:35.956942  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:36.450595  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:36.450617  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:36.450625  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:36.450631  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:36.453589  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:36.453613  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:36.453625  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:36 GMT
	I1009 23:20:36.453633  102501 round_trippers.go:580]     Audit-Id: 4c2e7bef-4838-4567-9c22-574f60a5cbfc
	I1009 23:20:36.453640  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:36.453648  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:36.453656  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:36.453666  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:36.453847  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:36.454314  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:36.454325  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:36.454339  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:36.454348  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:36.456697  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:36.456713  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:36.456720  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:36.456725  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:36.456730  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:36.456737  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:36.456742  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:36 GMT
	I1009 23:20:36.456747  102501 round_trippers.go:580]     Audit-Id: a28d550e-72fc-411a-b2c0-b82691b3d1a3
	I1009 23:20:36.456933  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:36.950614  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:36.950637  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:36.950646  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:36.950652  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:36.953322  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:36.953348  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:36.953359  102501 round_trippers.go:580]     Audit-Id: 9f94d7cf-e192-4798-a808-ae495b6a5dc0
	I1009 23:20:36.953377  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:36.953383  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:36.953388  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:36.953393  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:36.953399  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:36 GMT
	I1009 23:20:36.953615  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:36.954051  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:36.954063  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:36.954070  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:36.954075  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:36.956087  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:36.956101  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:36.956107  102501 round_trippers.go:580]     Audit-Id: ae602bc2-ec26-4b82-a7ba-9386dc3ced98
	I1009 23:20:36.956112  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:36.956117  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:36.956125  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:36.956143  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:36.956159  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:36 GMT
	I1009 23:20:36.956559  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:37.450302  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:37.450334  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:37.450347  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:37.450357  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:37.453118  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:37.453136  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:37.453143  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:37.453148  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:37.453153  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:37.453158  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:37.453164  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:37 GMT
	I1009 23:20:37.453168  102501 round_trippers.go:580]     Audit-Id: d8f4b962-c4d2-4ef2-8910-e4ef9cf07e7a
	I1009 23:20:37.453347  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:37.453949  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:37.453963  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:37.453974  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:37.453985  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:37.456136  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:37.456148  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:37.456154  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:37.456162  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:37.456167  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:37.456172  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:37 GMT
	I1009 23:20:37.456177  102501 round_trippers.go:580]     Audit-Id: 8039e311-517e-441c-a4ce-3f7153387b2c
	I1009 23:20:37.456182  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:37.456378  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:37.457296  102501 pod_ready.go:102] pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace has status "Ready":"False"
	I1009 23:20:37.950553  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:37.950581  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:37.950593  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:37.950602  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:37.953661  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:37.953687  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:37.953697  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:37.953705  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:37.953714  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:37.953722  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:37 GMT
	I1009 23:20:37.953730  102501 round_trippers.go:580]     Audit-Id: 8581c215-d165-4428-b59b-cd6196f50f8c
	I1009 23:20:37.953737  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:37.954054  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:37.954524  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:37.954535  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:37.954543  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:37.954548  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:37.957060  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:37.957081  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:37.957090  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:37.957099  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:37.957106  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:37.957115  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:37.957122  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:37 GMT
	I1009 23:20:37.957129  102501 round_trippers.go:580]     Audit-Id: 5487da8d-0781-4273-9481-e9476cb19a26
	I1009 23:20:37.957757  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:38.450541  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:38.450573  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:38.450586  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:38.450632  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:38.453295  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:38.453314  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:38.453323  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:38 GMT
	I1009 23:20:38.453332  102501 round_trippers.go:580]     Audit-Id: e8294c39-f93d-4bd6-90ac-a4a425f619ca
	I1009 23:20:38.453338  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:38.453346  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:38.453353  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:38.453362  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:38.453753  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:38.454201  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:38.454215  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:38.454222  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:38.454228  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:38.456835  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:38.456856  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:38.456865  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:38.456873  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:38 GMT
	I1009 23:20:38.456880  102501 round_trippers.go:580]     Audit-Id: 8661ad97-c4ac-428f-b70b-04c7d1742d82
	I1009 23:20:38.456887  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:38.456894  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:38.456906  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:38.457908  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:38.950784  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:38.950810  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:38.950819  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:38.950825  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:38.955797  102501 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 23:20:38.955817  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:38.955827  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:38.955834  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:38.955842  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:38.955849  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:38.955856  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:38 GMT
	I1009 23:20:38.955870  102501 round_trippers.go:580]     Audit-Id: 0476ef7e-c3c4-408f-9971-2cfff635aa22
	I1009 23:20:38.956478  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:38.956961  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:38.956975  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:38.956985  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:38.956994  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:38.960366  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:38.960388  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:38.960397  102501 round_trippers.go:580]     Audit-Id: 939873c9-4aa3-4709-a0eb-f4ae58af1a39
	I1009 23:20:38.960405  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:38.960414  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:38.960422  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:38.960431  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:38.960439  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:38 GMT
	I1009 23:20:38.960636  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:39.450297  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:39.450336  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:39.450349  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:39.450358  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:39.453393  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:39.453418  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:39.453428  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:39 GMT
	I1009 23:20:39.453436  102501 round_trippers.go:580]     Audit-Id: 02006573-bdfe-485c-96e5-865d0d5dc79a
	I1009 23:20:39.453444  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:39.453452  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:39.453458  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:39.453466  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:39.453688  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:39.454212  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:39.454227  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:39.454235  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:39.454243  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:39.456618  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:39.456639  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:39.456648  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:39 GMT
	I1009 23:20:39.456655  102501 round_trippers.go:580]     Audit-Id: 0813b480-1559-4de5-8d1e-6a66e1806d0a
	I1009 23:20:39.456663  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:39.456670  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:39.456677  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:39.456685  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:39.456888  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:39.950666  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:39.950688  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:39.950697  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:39.950703  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:39.953686  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:39.953711  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:39.953721  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:39.953729  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:39.953736  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:39.953744  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:39.953752  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:39 GMT
	I1009 23:20:39.953760  102501 round_trippers.go:580]     Audit-Id: 2d09ac8b-6324-4c53-8889-4495fb395f12
	I1009 23:20:39.954305  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:39.954825  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:39.954839  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:39.954847  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:39.954852  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:39.957134  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:39.957151  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:39.957161  102501 round_trippers.go:580]     Audit-Id: cdfda355-ccb5-41c2-aa66-e6ac8badbb2a
	I1009 23:20:39.957170  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:39.957182  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:39.957197  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:39.957206  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:39.957219  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:39 GMT
	I1009 23:20:39.957404  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:39.957726  102501 pod_ready.go:102] pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace has status "Ready":"False"
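(The pod_ready.go:102 line above closes one poll cycle: the coredns pod's Ready condition is still False, so the poller sleeps and retries. A minimal sketch of how that condition can be read from pod.Status.Conditions using the client-go types; isPodReady is an illustrative name, not necessarily minikube's actual helper:

	package podready

	import corev1 "k8s.io/api/core/v1"

	// isPodReady reports whether the pod's PodReady condition is True,
	// i.e. the value the log prints as "Ready":"False" / "Ready":"True".
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

A pod with no PodReady condition recorded yet is treated as not ready, which matches the repeated "Ready":"False" cycles above.)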
	I1009 23:20:40.450024  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:40.450045  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:40.450054  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:40.450060  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:40.453142  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:40.453168  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:40.453179  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:40.453187  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:40.453195  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:40.453202  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:40 GMT
	I1009 23:20:40.453210  102501 round_trippers.go:580]     Audit-Id: 4baf2498-8a31-4f5e-b0ff-af643558e31a
	I1009 23:20:40.453217  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:40.453430  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:40.453938  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:40.453953  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:40.453960  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:40.453966  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:40.456238  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:40.456252  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:40.456258  102501 round_trippers.go:580]     Audit-Id: 48d647bb-7746-440b-a8c9-bfc473f05f84
	I1009 23:20:40.456264  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:40.456272  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:40.456280  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:40.456293  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:40.456306  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:40 GMT
	I1009 23:20:40.456441  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:40.950136  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:40.950161  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:40.950170  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:40.950176  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:40.952915  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:40.952944  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:40.952954  102501 round_trippers.go:580]     Audit-Id: 5cadf7f4-8cde-450d-ac16-88e7044a8cb7
	I1009 23:20:40.952961  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:40.952969  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:40.952978  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:40.952987  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:40.952996  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:40 GMT
	I1009 23:20:40.953378  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:40.953933  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:40.953950  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:40.953962  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:40.953975  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:40.956174  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:40.956191  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:40.956198  102501 round_trippers.go:580]     Audit-Id: cae46e89-2922-4dcf-b9ab-334199af84a8
	I1009 23:20:40.956204  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:40.956211  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:40.956217  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:40.956224  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:40.956234  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:40 GMT
	I1009 23:20:40.956388  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:41.450010  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:41.450033  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:41.450042  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:41.450048  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:41.453201  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:41.453226  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:41.453237  102501 round_trippers.go:580]     Audit-Id: b156692c-6495-4edf-95bf-0ee131f2d945
	I1009 23:20:41.453249  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:41.453256  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:41.453261  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:41.453268  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:41.453273  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:41 GMT
	I1009 23:20:41.453837  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:41.454410  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:41.454427  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:41.454438  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:41.454448  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:41.456567  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:41.456586  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:41.456595  102501 round_trippers.go:580]     Audit-Id: c9dc1713-32ca-4291-9f10-a4a8099346b2
	I1009 23:20:41.456606  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:41.456614  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:41.456627  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:41.456636  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:41.456646  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:41 GMT
	I1009 23:20:41.456842  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:41.950542  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:41.950566  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:41.950575  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:41.950582  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:41.953575  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:41.953599  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:41.953606  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:41.953612  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:41.953618  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:41 GMT
	I1009 23:20:41.953628  102501 round_trippers.go:580]     Audit-Id: 000ce8d0-2dbb-40a6-b484-ad04e5b43314
	I1009 23:20:41.953635  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:41.953641  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:41.953787  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:41.954263  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:41.954275  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:41.954282  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:41.954288  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:41.956309  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:41.956329  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:41.956338  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:41.956345  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:41 GMT
	I1009 23:20:41.956356  102501 round_trippers.go:580]     Audit-Id: 2a0a22bd-96d8-4498-bc83-d56ca261bc9e
	I1009 23:20:41.956363  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:41.956374  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:41.956385  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:41.956598  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:42.450250  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:42.450273  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:42.450282  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:42.450288  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:42.453440  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:42.453458  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:42.453465  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:42.453471  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:42.453477  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:42 GMT
	I1009 23:20:42.453482  102501 round_trippers.go:580]     Audit-Id: ef242ea2-170d-4dbb-9f84-095d55874b92
	I1009 23:20:42.453490  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:42.453496  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:42.453680  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:42.454149  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:42.454163  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:42.454172  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:42.454178  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:42.458035  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:42.458050  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:42.458059  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:42 GMT
	I1009 23:20:42.458067  102501 round_trippers.go:580]     Audit-Id: 2550c4db-473c-43e5-a656-9ae7b1e3ec7f
	I1009 23:20:42.458075  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:42.458085  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:42.458093  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:42.458102  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:42.458829  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:42.459276  102501 pod_ready.go:102] pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace has status "Ready":"False"
	I1009 23:20:42.950345  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:42.950365  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:42.950376  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:42.950383  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:42.952895  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:42.952914  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:42.952922  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:42.952930  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:42.952939  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:42.952946  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:42.952953  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:42 GMT
	I1009 23:20:42.952962  102501 round_trippers.go:580]     Audit-Id: 365954c9-b6d3-4982-9b8a-94a8c6a38b24
	I1009 23:20:42.953175  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:42.953641  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:42.953653  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:42.953661  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:42.953667  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:42.956880  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:42.956901  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:42.956908  102501 round_trippers.go:580]     Audit-Id: d83801ca-ef60-44e6-ab20-505d75c7f0bc
	I1009 23:20:42.956918  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:42.956926  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:42.956936  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:42.956944  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:42.956956  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:42 GMT
	I1009 23:20:42.957072  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:43.450745  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:43.450768  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:43.450777  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:43.450782  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:43.453815  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:43.453836  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:43.453844  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:43.453851  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:43 GMT
	I1009 23:20:43.453858  102501 round_trippers.go:580]     Audit-Id: 729df228-740a-4ccd-810f-d5815b39d10f
	I1009 23:20:43.453867  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:43.453875  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:43.453882  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:43.454086  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:43.454668  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:43.454681  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:43.454692  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:43.454702  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:43.457415  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:43.457436  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:43.457446  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:43.457455  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:43.457463  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:43.457475  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:43.457490  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:43 GMT
	I1009 23:20:43.457498  102501 round_trippers.go:580]     Audit-Id: 10531947-3aeb-48d5-a013-fee9f94bf55c
	I1009 23:20:43.457995  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:43.950709  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:43.950731  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:43.950744  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:43.950750  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:43.954962  102501 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 23:20:43.954988  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:43.954999  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:43.955008  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:43.955016  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:43.955023  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:43 GMT
	I1009 23:20:43.955034  102501 round_trippers.go:580]     Audit-Id: 3e5595b0-6e28-4125-937a-1bfe46bbd865
	I1009 23:20:43.955042  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:43.955197  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:43.955673  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:43.955685  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:43.955693  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:43.955698  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:43.957906  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:43.957923  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:43.957933  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:43.957942  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:43.957949  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:43 GMT
	I1009 23:20:43.957957  102501 round_trippers.go:580]     Audit-Id: 8e59839b-4b8a-4355-9bd0-f5905da14813
	I1009 23:20:43.957964  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:43.957972  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:43.958216  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:44.450571  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:44.450599  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.450612  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.450621  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.453574  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:44.453590  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.453598  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.453603  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.453608  102501 round_trippers.go:580]     Audit-Id: 701a95e1-644f-47ba-a8fe-039e7e489cf5
	I1009 23:20:44.453613  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.453619  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.453624  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.453852  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:44.454473  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:44.454491  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.454501  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.454516  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.456695  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:44.456717  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.456728  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.456736  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.456742  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.456753  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.456761  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.456774  102501 round_trippers.go:580]     Audit-Id: 8259a205-4926-466b-8eb4-dc7f362828e1
	I1009 23:20:44.456993  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:44.950904  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:44.950938  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.950951  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.950961  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.954163  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:44.954182  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.954191  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.954199  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.954207  102501 round_trippers.go:580]     Audit-Id: f17773d0-5364-4e3c-abfa-567d417ce0e4
	I1009 23:20:44.954214  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.954223  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.954233  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.954673  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1248","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I1009 23:20:44.955122  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:44.955133  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.955140  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.955146  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.957283  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:44.957299  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.957306  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.957315  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.957321  102501 round_trippers.go:580]     Audit-Id: d0478365-3c47-4914-acef-c750200ca712
	I1009 23:20:44.957329  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.957335  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.957343  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.957669  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:44.957968  102501 pod_ready.go:92] pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:44.957986  102501 pod_ready.go:81] duration metric: took 11.522164121s waiting for pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:44.957998  102501 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-921619" in "kube-system" namespace to be "Ready" ...
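(The timestamps above show one GET pair roughly every 500ms, pod_ready.go:78 states a 6m0s budget per pod, and the coredns wait completed in 11.52s. A hedged sketch of an equivalent wait loop using client-go's wait.PollUntilContextTimeout; the clientset wiring and function name are placeholder assumptions, not minikube's actual implementation:

	package podready

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodReady polls the API server every 500ms, as the log
	// timestamps do, until the named pod reports the PodReady condition
	// as True or the 6-minute budget ("waiting up to 6m0s") expires.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

Returning (false, nil) on a transient Get error keeps the loop alive until the deadline, which is why each 500ms cycle in the log issues a fresh GET rather than aborting.)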
	I1009 23:20:44.958059  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-921619
	I1009 23:20:44.958069  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.958079  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.958089  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.960240  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:44.960256  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.960262  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.960268  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.960273  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.960278  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.960282  102501 round_trippers.go:580]     Audit-Id: 974f8aa0-1058-41d8-87c4-c2bade8f9075
	I1009 23:20:44.960291  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.960901  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-921619","namespace":"kube-system","uid":"5642d3e0-eecc-4fce-a750-9c68f66042e8","resourceVersion":"1236","creationTimestamp":"2023-10-09T23:13:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.167:2379","kubernetes.io/config.hash":"51389476e64a88c1fb4ad2d7318e8384","kubernetes.io/config.mirror":"51389476e64a88c1fb4ad2d7318e8384","kubernetes.io/config.seen":"2023-10-09T23:13:10.214448400Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6082 chars]
	I1009 23:20:44.961281  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:44.961292  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.961299  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.961305  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.963201  102501 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 23:20:44.963219  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.963228  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.963236  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.963258  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.963270  102501 round_trippers.go:580]     Audit-Id: 7cd46612-3ce4-48dd-999b-0eaa5ffba4c1
	I1009 23:20:44.963283  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.963295  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.963447  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:44.963749  102501 pod_ready.go:92] pod "etcd-multinode-921619" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:44.963763  102501 pod_ready.go:81] duration metric: took 5.759104ms waiting for pod "etcd-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:44.963780  102501 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:44.963828  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-921619
	I1009 23:20:44.963835  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.963842  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.963848  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.966062  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:44.966078  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.966092  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.966099  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.966107  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.966115  102501 round_trippers.go:580]     Audit-Id: ea54d090-70f8-471b-942e-38a9e8424516
	I1009 23:20:44.966127  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.966137  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.966305  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-921619","namespace":"kube-system","uid":"bb483c09-0ecb-447b-a339-2494340bda70","resourceVersion":"1215","creationTimestamp":"2023-10-09T23:13:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.167:8443","kubernetes.io/config.hash":"3992fff0ca56642e7b8e9139e8dd6a1b","kubernetes.io/config.mirror":"3992fff0ca56642e7b8e9139e8dd6a1b","kubernetes.io/config.seen":"2023-10-09T23:13:02.202089577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7616 chars]
	I1009 23:20:44.966762  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:44.966778  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.966788  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.966796  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.968643  102501 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 23:20:44.968660  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.968671  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.968678  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.968683  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.968689  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.968697  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.968708  102501 round_trippers.go:580]     Audit-Id: 0882ad92-7872-4bae-a419-4526ad37647b
	I1009 23:20:44.968909  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:44.969230  102501 pod_ready.go:92] pod "kube-apiserver-multinode-921619" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:44.969245  102501 pod_ready.go:81] duration metric: took 5.45575ms waiting for pod "kube-apiserver-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:44.969254  102501 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:44.969305  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-921619
	I1009 23:20:44.969313  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.969319  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.969325  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.971182  102501 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 23:20:44.971201  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.971210  102501 round_trippers.go:580]     Audit-Id: eba575ff-0f79-4d59-aa23-831769e821e0
	I1009 23:20:44.971218  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.971226  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.971233  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.971259  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.971265  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.971549  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-921619","namespace":"kube-system","uid":"e39c9043-b776-4ae0-b79a-528bf4fe7198","resourceVersion":"1221","creationTimestamp":"2023-10-09T23:13:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5029a9f6494c3e91f8e10e5de930fb7a","kubernetes.io/config.mirror":"5029a9f6494c3e91f8e10e5de930fb7a","kubernetes.io/config.seen":"2023-10-09T23:13:10.214452022Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I1009 23:20:44.971939  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:44.971952  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.971959  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.971965  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.973731  102501 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 23:20:44.973748  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.973756  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.973765  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.973774  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.973780  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.973786  102501 round_trippers.go:580]     Audit-Id: 6ecb7555-b0cc-4193-b823-4fdec82d35eb
	I1009 23:20:44.973791  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.973925  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:44.974200  102501 pod_ready.go:92] pod "kube-controller-manager-multinode-921619" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:44.974214  102501 pod_ready.go:81] duration metric: took 4.949426ms waiting for pod "kube-controller-manager-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:44.974223  102501 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6nfdb" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:44.974272  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nfdb
	I1009 23:20:44.974283  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.974293  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.974306  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.976081  102501 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 23:20:44.976095  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.976102  102501 round_trippers.go:580]     Audit-Id: 0b021f28-7fc8-42d9-9a5b-9bff16c9f8f5
	I1009 23:20:44.976107  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.976112  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.976117  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.976122  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.976127  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.976244  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6nfdb","generateName":"kube-proxy-","namespace":"kube-system","uid":"5cbea5fb-98dd-4276-9b89-588271309935","resourceVersion":"1087","creationTimestamp":"2023-10-09T23:15:07Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"397c0b68-e3eb-4745-879b-9ebb950e99c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"397c0b68-e3eb-4745-879b-9ebb950e99c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I1009 23:20:44.976607  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619-m03
	I1009 23:20:44.976619  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.976626  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.976632  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.978313  102501 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1009 23:20:44.978322  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.978328  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.978334  102501 round_trippers.go:580]     Content-Length: 210
	I1009 23:20:44.978342  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.978351  102501 round_trippers.go:580]     Audit-Id: 187e7715-12d3-40c8-ba73-48e29062ebe2
	I1009 23:20:44.978363  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.978370  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.978376  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.978452  102501 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-921619-m03\" not found","reason":"NotFound","details":{"name":"multinode-921619-m03","kind":"nodes"},"code":404}
	I1009 23:20:44.978548  102501 pod_ready.go:97] node "multinode-921619-m03" hosting pod "kube-proxy-6nfdb" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-921619-m03": nodes "multinode-921619-m03" not found
	I1009 23:20:44.978561  102501 pod_ready.go:81] duration metric: took 4.332634ms waiting for pod "kube-proxy-6nfdb" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:44.978569  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619-m03" hosting pod "kube-proxy-6nfdb" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-921619-m03": nodes "multinode-921619-m03" not found
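The 404 above turns into a skip rather than a failure: the waiter resolves the node named in the pod's spec first, and a missing node (multinode-921619-m03 here, removed by the earlier restart) short-circuits the readiness wait. A minimal client-go sketch of that guard, assuming a standard kubeconfig; this is illustrative, not minikube's actual pod_ready.go code:

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-6nfdb", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Look up the hosting node before demanding pod readiness.
	if _, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{}); apierrors.IsNotFound(err) {
		fmt.Printf("node %q not found; skipping readiness wait for %s\n", pod.Spec.NodeName, pod.Name)
		return
	}
	fmt.Println("node exists; proceed to wait for Ready condition")
}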
	I1009 23:20:44.978575  102501 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qlflz" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:45.150898  102501 request.go:629] Waited for 172.264891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlflz
	I1009 23:20:45.150973  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlflz
	I1009 23:20:45.150980  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:45.150993  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:45.151008  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:45.153692  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:45.153711  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:45.153718  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:45.153724  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:45.153729  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:45.153734  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:45.153745  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:45 GMT
	I1009 23:20:45.153750  102501 round_trippers.go:580]     Audit-Id: 1c7d393b-15ca-416d-9939-5120ca21de4d
	I1009 23:20:45.153868  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qlflz","generateName":"kube-proxy-","namespace":"kube-system","uid":"18003542-04f4-4330-8054-2e82da1f94f0","resourceVersion":"973","creationTimestamp":"2023-10-09T23:14:14Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"397c0b68-e3eb-4745-879b-9ebb950e99c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:14:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"397c0b68-e3eb-4745-879b-9ebb950e99c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
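The "Waited ... due to client-side throttling" lines come from client-go's token-bucket rate limiter on the client side, not from API-server priority-and-fairness (as the message itself notes). A hedged sketch of where that knob lives; 5 QPS with a burst of 10 are client-go's defaults, not values read from this run:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Left at zero these default to 5 QPS / burst 10, which is what queues
	// back-to-back GETs for ~200ms at a time as seen above. Raising them
	// trades apiserver load for client latency.
	cfg.QPS = 50
	cfg.Burst = 100
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Printf("client ready (QPS=%v burst=%d): %T\n", cfg.QPS, cfg.Burst, cs)
}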
	I1009 23:20:45.351681  102501 request.go:629] Waited for 197.379442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619-m02
	I1009 23:20:45.351746  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619-m02
	I1009 23:20:45.351756  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:45.351769  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:45.351779  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:45.354658  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:45.354677  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:45.354684  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:45.354690  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:45 GMT
	I1009 23:20:45.354699  102501 round_trippers.go:580]     Audit-Id: bf8f1e3c-f3b9-41f6-a533-853c4960c94f
	I1009 23:20:45.354714  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:45.354721  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:45.354738  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:45.354947  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619-m02","uid":"fccae5d8-c831-4dfb-91f9-523a6eb81706","resourceVersion":"992","creationTimestamp":"2023-10-09T23:18:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:18:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3253 chars]
	I1009 23:20:45.355244  102501 pod_ready.go:92] pod "kube-proxy-qlflz" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:45.355260  102501 pod_ready.go:81] duration metric: took 376.677019ms waiting for pod "kube-proxy-qlflz" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:45.355270  102501 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t28g5" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:45.551819  102501 request.go:629] Waited for 196.475136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t28g5
	I1009 23:20:45.551890  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t28g5
	I1009 23:20:45.551901  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:45.551912  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:45.551921  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:45.555808  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:45.555840  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:45.555850  102501 round_trippers.go:580]     Audit-Id: 183db5e6-6cf3-467c-ab4d-01de8ad3bad8
	I1009 23:20:45.555858  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:45.555866  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:45.555873  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:45.555881  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:45.555890  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:45 GMT
	I1009 23:20:45.556097  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t28g5","generateName":"kube-proxy-","namespace":"kube-system","uid":"e6e517cb-b1f0-4baa-9bb8-7eb0a8f4c339","resourceVersion":"1207","creationTimestamp":"2023-10-09T23:13:22Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"397c0b68-e3eb-4745-879b-9ebb950e99c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"397c0b68-e3eb-4745-879b-9ebb950e99c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I1009 23:20:45.750871  102501 request.go:629] Waited for 194.305299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:45.750950  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:45.750962  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:45.750974  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:45.750987  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:45.753850  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:45.753869  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:45.753879  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:45.753887  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:45.753894  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:45.753901  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:45 GMT
	I1009 23:20:45.753908  102501 round_trippers.go:580]     Audit-Id: a58516a3-a80a-46d3-977c-3cd88f17b3d5
	I1009 23:20:45.753917  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:45.754077  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:45.754532  102501 pod_ready.go:92] pod "kube-proxy-t28g5" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:45.754554  102501 pod_ready.go:81] duration metric: took 399.276515ms waiting for pod "kube-proxy-t28g5" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:45.754567  102501 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:45.950958  102501 request.go:629] Waited for 196.305216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-921619
	I1009 23:20:45.951034  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-921619
	I1009 23:20:45.951041  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:45.951053  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:45.951065  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:45.954563  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:45.954586  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:45.954595  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:45.954603  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:45.954618  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:45.954626  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:45.954637  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:45 GMT
	I1009 23:20:45.954647  102501 round_trippers.go:580]     Audit-Id: db7cc390-b3ad-4335-a1c4-ff6a07f55ba0
	I1009 23:20:45.954968  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-921619","namespace":"kube-system","uid":"9dc6b59f-e995-4b55-a755-8190f5c2d586","resourceVersion":"1219","creationTimestamp":"2023-10-09T23:13:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"791efd99637773aca959cb55de9c4adc","kubernetes.io/config.mirror":"791efd99637773aca959cb55de9c4adc","kubernetes.io/config.seen":"2023-10-09T23:13:10.214452753Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I1009 23:20:46.151772  102501 request.go:629] Waited for 196.378051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:46.151849  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:46.151857  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:46.151865  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:46.151871  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:46.154311  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:46.154326  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:46.154339  102501 round_trippers.go:580]     Audit-Id: b4312cf1-e832-4b2f-908d-937dc67188bf
	I1009 23:20:46.154352  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:46.154360  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:46.154368  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:46.154376  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:46.154387  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:46 GMT
	I1009 23:20:46.154935  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:46.155246  102501 pod_ready.go:92] pod "kube-scheduler-multinode-921619" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:46.155262  102501 pod_ready.go:81] duration metric: took 400.684725ms waiting for pod "kube-scheduler-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:46.155276  102501 pod_ready.go:38] duration metric: took 12.728696491s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
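The 12.7s block above is one pattern repeated per component: GET the pod, check its Ready condition, GET the hosting node, sleep, repeat. A self-contained sketch of that loop with client-go, assuming a recent apimachinery; waitPodReady is an illustrative name, and the poll interval is assumed, not taken from minikube:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod reports condition Ready=True or the
// timeout expires. Illustrative only; minikube's pod_ready.go layers
// duration metrics and node checks on top of the same idea.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-multinode-921619", 6*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}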
	I1009 23:20:46.155306  102501 api_server.go:52] waiting for apiserver process to appear ...
	I1009 23:20:46.155360  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:20:46.168221  102501 command_runner.go:130] > 1551
	I1009 23:20:46.168250  102501 api_server.go:72] duration metric: took 15.125659604s to wait for apiserver process to appear ...
	I1009 23:20:46.168258  102501 api_server.go:88] waiting for apiserver healthz status ...
	I1009 23:20:46.168274  102501 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I1009 23:20:46.173103  102501 api_server.go:279] https://192.168.39.167:8443/healthz returned 200:
	ok
	I1009 23:20:46.173166  102501 round_trippers.go:463] GET https://192.168.39.167:8443/version
	I1009 23:20:46.173178  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:46.173188  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:46.173198  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:46.174086  102501 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1009 23:20:46.174102  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:46.174111  102501 round_trippers.go:580]     Audit-Id: 251714ce-1c94-42b9-a8ab-32715e6a22d6
	I1009 23:20:46.174120  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:46.174131  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:46.174150  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:46.174166  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:46.174175  102501 round_trippers.go:580]     Content-Length: 263
	I1009 23:20:46.174188  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:46 GMT
	I1009 23:20:46.174212  102501 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1009 23:20:46.174262  102501 api_server.go:141] control plane version: v1.28.2
	I1009 23:20:46.174278  102501 api_server.go:131] duration metric: took 6.014ms to wait for apiserver health ...
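The healthz probe and version fetch above are plain HTTPS GETs. A stdlib-only sketch; InsecureSkipVerify stands in for the client-certificate auth minikube actually configures, and both paths are anonymously readable on a default RBAC setup via system:public-info-viewer:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	c := &http.Client{Transport: &http.Transport{
		// Sketch only: the real client trusts the cluster CA and presents
		// client certs instead of skipping verification.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	resp, err := c.Get("https://192.168.39.167:8443/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect 200 "ok"

	resp, err = c.Get("https://192.168.39.167:8443/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // "v1.28.2" above
}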
	I1009 23:20:46.174288  102501 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 23:20:46.351709  102501 request.go:629] Waited for 177.345783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:46.351843  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:46.351856  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:46.351868  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:46.351886  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:46.359372  102501 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1009 23:20:46.359402  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:46.359410  102501 round_trippers.go:580]     Audit-Id: 7020ed98-f629-4cbe-b064-45c47376cfa8
	I1009 23:20:46.359415  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:46.359420  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:46.359425  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:46.359430  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:46.359436  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:46 GMT
	I1009 23:20:46.361102  102501 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1257"},"items":[{"metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1248","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83389 chars]
	I1009 23:20:46.364625  102501 system_pods.go:59] 12 kube-system pods found
	I1009 23:20:46.364658  102501 system_pods.go:61] "coredns-5dd5756b68-m56ds" [2898e186-93b2-49f3-9e87-2f6c4f5619ef] Running
	I1009 23:20:46.364665  102501 system_pods.go:61] "etcd-multinode-921619" [5642d3e0-eecc-4fce-a750-9c68f66042e8] Running
	I1009 23:20:46.364671  102501 system_pods.go:61] "kindnet-ddwsx" [2475cf58-f505-4b9f-b133-dcd2cdb74489] Running
	I1009 23:20:46.364678  102501 system_pods.go:61] "kindnet-mvhgv" [c66b80a9-b1d2-43b8-b1f2-a9be10b998a6] Running
	I1009 23:20:46.364685  102501 system_pods.go:61] "kindnet-w7ch7" [21dbde88-f1f9-40d2-9893-8ee4b88088bd] Running
	I1009 23:20:46.364693  102501 system_pods.go:61] "kube-apiserver-multinode-921619" [bb483c09-0ecb-447b-a339-2494340bda70] Running
	I1009 23:20:46.364700  102501 system_pods.go:61] "kube-controller-manager-multinode-921619" [e39c9043-b776-4ae0-b79a-528bf4fe7198] Running
	I1009 23:20:46.364707  102501 system_pods.go:61] "kube-proxy-6nfdb" [5cbea5fb-98dd-4276-9b89-588271309935] Running
	I1009 23:20:46.364720  102501 system_pods.go:61] "kube-proxy-qlflz" [18003542-04f4-4330-8054-2e82da1f94f0] Running
	I1009 23:20:46.364726  102501 system_pods.go:61] "kube-proxy-t28g5" [e6e517cb-b1f0-4baa-9bb8-7eb0a8f4c339] Running
	I1009 23:20:46.364736  102501 system_pods.go:61] "kube-scheduler-multinode-921619" [9dc6b59f-e995-4b55-a755-8190f5c2d586] Running
	I1009 23:20:46.364745  102501 system_pods.go:61] "storage-provisioner" [cdc4f60e-144f-44b8-ac4f-741589b7146f] Running
	I1009 23:20:46.364755  102501 system_pods.go:74] duration metric: took 190.457725ms to wait for pod list to return data ...
	I1009 23:20:46.364767  102501 default_sa.go:34] waiting for default service account to be created ...
	I1009 23:20:46.551260  102501 request.go:629] Waited for 186.405273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/default/serviceaccounts
	I1009 23:20:46.551350  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/default/serviceaccounts
	I1009 23:20:46.551357  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:46.551376  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:46.551390  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:46.554613  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:46.554632  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:46.554641  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:46.554647  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:46.554653  102501 round_trippers.go:580]     Content-Length: 262
	I1009 23:20:46.554658  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:46 GMT
	I1009 23:20:46.554664  102501 round_trippers.go:580]     Audit-Id: eb20c408-c19c-4657-b12f-b799b4d76f81
	I1009 23:20:46.554670  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:46.554679  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:46.554709  102501 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1258"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c91b1817-a383-4590-ae03-64162cee6fef","resourceVersion":"335","creationTimestamp":"2023-10-09T23:13:22Z"}}]}
	I1009 23:20:46.554933  102501 default_sa.go:45] found service account: "default"
	I1009 23:20:46.554954  102501 default_sa.go:55] duration metric: took 190.176623ms for default service account to be created ...
	I1009 23:20:46.554965  102501 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 23:20:46.751450  102501 request.go:629] Waited for 196.397016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:46.751524  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:46.751544  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:46.751558  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:46.751572  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:46.755611  102501 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 23:20:46.755632  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:46.755639  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:46.755646  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:46.755654  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:46.755663  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:46 GMT
	I1009 23:20:46.755672  102501 round_trippers.go:580]     Audit-Id: 0009ccfa-6d5b-4fda-83db-3c422b0352c2
	I1009 23:20:46.755680  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:46.757010  102501 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1258"},"items":[{"metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1248","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83389 chars]
	I1009 23:20:46.759494  102501 system_pods.go:86] 12 kube-system pods found
	I1009 23:20:46.759514  102501 system_pods.go:89] "coredns-5dd5756b68-m56ds" [2898e186-93b2-49f3-9e87-2f6c4f5619ef] Running
	I1009 23:20:46.759522  102501 system_pods.go:89] "etcd-multinode-921619" [5642d3e0-eecc-4fce-a750-9c68f66042e8] Running
	I1009 23:20:46.759526  102501 system_pods.go:89] "kindnet-ddwsx" [2475cf58-f505-4b9f-b133-dcd2cdb74489] Running
	I1009 23:20:46.759530  102501 system_pods.go:89] "kindnet-mvhgv" [c66b80a9-b1d2-43b8-b1f2-a9be10b998a6] Running
	I1009 23:20:46.759535  102501 system_pods.go:89] "kindnet-w7ch7" [21dbde88-f1f9-40d2-9893-8ee4b88088bd] Running
	I1009 23:20:46.759542  102501 system_pods.go:89] "kube-apiserver-multinode-921619" [bb483c09-0ecb-447b-a339-2494340bda70] Running
	I1009 23:20:46.759553  102501 system_pods.go:89] "kube-controller-manager-multinode-921619" [e39c9043-b776-4ae0-b79a-528bf4fe7198] Running
	I1009 23:20:46.759559  102501 system_pods.go:89] "kube-proxy-6nfdb" [5cbea5fb-98dd-4276-9b89-588271309935] Running
	I1009 23:20:46.759565  102501 system_pods.go:89] "kube-proxy-qlflz" [18003542-04f4-4330-8054-2e82da1f94f0] Running
	I1009 23:20:46.759573  102501 system_pods.go:89] "kube-proxy-t28g5" [e6e517cb-b1f0-4baa-9bb8-7eb0a8f4c339] Running
	I1009 23:20:46.759578  102501 system_pods.go:89] "kube-scheduler-multinode-921619" [9dc6b59f-e995-4b55-a755-8190f5c2d586] Running
	I1009 23:20:46.759584  102501 system_pods.go:89] "storage-provisioner" [cdc4f60e-144f-44b8-ac4f-741589b7146f] Running
	I1009 23:20:46.759590  102501 system_pods.go:126] duration metric: took 204.615857ms to wait for k8s-apps to be running ...
	I1009 23:20:46.759607  102501 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 23:20:46.759672  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:20:46.773659  102501 system_svc.go:56] duration metric: took 14.042695ms WaitForService to wait for kubelet.
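The kubelet check relies only on the exit code of systemctl is-active --quiet. A local sketch with os/exec; minikube runs the equivalent command over SSH inside the VM, with sudo and an extra "service" token visible in the log line above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; a zero exit status means the unit is active.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}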
	I1009 23:20:46.773696  102501 kubeadm.go:581] duration metric: took 15.731104662s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1009 23:20:46.773713  102501 node_conditions.go:102] verifying NodePressure condition ...
	I1009 23:20:46.951138  102501 request.go:629] Waited for 177.328875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes
	I1009 23:20:46.951197  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes
	I1009 23:20:46.951202  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:46.951210  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:46.951216  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:46.953890  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:46.953916  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:46.953926  102501 round_trippers.go:580]     Audit-Id: 4dca49e1-a499-4563-ab9b-cf42c45625d0
	I1009 23:20:46.953935  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:46.953942  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:46.953950  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:46.953958  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:46.953967  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:46 GMT
	I1009 23:20:46.954192  102501 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1259"},"items":[{"metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 9457 chars]
	I1009 23:20:46.954646  102501 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1009 23:20:46.954664  102501 node_conditions.go:123] node cpu capacity is 2
	I1009 23:20:46.954676  102501 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1009 23:20:46.954680  102501 node_conditions.go:123] node cpu capacity is 2
	I1009 23:20:46.954684  102501 node_conditions.go:105] duration metric: took 180.967837ms to run NodePressure ...
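The two capacity pairs above (one per node) come straight off the Node objects. A client-go sketch of the same NodePressure-style readout, assuming a standard kubeconfig:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		caps := n.Status.Capacity
		// Matches the log: ephemeral capacity 17784752Ki, cpu capacity 2.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name, caps.StorageEphemeral().String(), caps.Cpu().String())
	}
}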
	I1009 23:20:46.954697  102501 start.go:228] waiting for startup goroutines ...
	I1009 23:20:46.954704  102501 start.go:233] waiting for cluster config update ...
	I1009 23:20:46.954710  102501 start.go:242] writing updated cluster config ...
	I1009 23:20:46.955165  102501 config.go:182] Loaded profile config "multinode-921619": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:20:46.955244  102501 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/config.json ...
	I1009 23:20:46.958391  102501 out.go:177] * Starting worker node multinode-921619-m02 in cluster multinode-921619
	I1009 23:20:46.959599  102501 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1009 23:20:46.959618  102501 cache.go:57] Caching tarball of preloaded images
	I1009 23:20:46.959708  102501 preload.go:174] Found /home/jenkins/minikube-integration/17375-78415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1009 23:20:46.959752  102501 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1009 23:20:46.959850  102501 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/config.json ...
	I1009 23:20:46.960012  102501 start.go:365] acquiring machines lock for multinode-921619-m02: {Name:mk4d06451f08f4d0dfbc191a7a07492b6e7c9c1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 23:20:46.960057  102501 start.go:369] acquired machines lock for "multinode-921619-m02" in 23.889µs
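The machines lock Spec printed above (Name, Clock, Delay 500ms, Timeout 13m, Cancel) matches the shape of a juju/mutex cross-process lock. A hedged sketch of that pattern, assuming the juju/mutex/v2 API; the lock name here is illustrative, not the mk... hash from the log:

package main

import (
	"fmt"
	"time"

	"github.com/juju/clock"
	"github.com/juju/mutex/v2"
)

func main() {
	spec := mutex.Spec{
		Name:    "mkexamplelock", // illustrative; the log uses a hashed name
		Clock:   clock.WallClock,
		Delay:   500 * time.Millisecond, // retry interval, as printed above
		Timeout: 13 * time.Minute,
	}
	releaser, err := mutex.Acquire(spec)
	if err != nil {
		panic(err)
	}
	defer releaser.Release()
	fmt.Println("holding machines lock; safe to mutate machine state")
}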
	I1009 23:20:46.960070  102501 start.go:96] Skipping create...Using existing machine configuration
	I1009 23:20:46.960100  102501 fix.go:54] fixHost starting: m02
	I1009 23:20:46.960364  102501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:20:46.960385  102501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:20:46.975273  102501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I1009 23:20:46.975676  102501 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:20:46.976086  102501 main.go:141] libmachine: Using API Version  1
	I1009 23:20:46.976107  102501 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:20:46.976465  102501 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:20:46.976685  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:20:46.976840  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetState
	I1009 23:20:46.978216  102501 fix.go:102] recreateIfNeeded on multinode-921619-m02: state=Stopped err=<nil>
	I1009 23:20:46.978237  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	W1009 23:20:46.978399  102501 fix.go:128] unexpected machine state, will restart: <nil>
	I1009 23:20:46.980512  102501 out.go:177] * Restarting existing kvm2 VM for "multinode-921619-m02" ...
	I1009 23:20:46.982010  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .Start
	I1009 23:20:46.982178  102501 main.go:141] libmachine: (multinode-921619-m02) Ensuring networks are active...
	I1009 23:20:46.983008  102501 main.go:141] libmachine: (multinode-921619-m02) Ensuring network default is active
	I1009 23:20:46.983363  102501 main.go:141] libmachine: (multinode-921619-m02) Ensuring network mk-multinode-921619 is active
	I1009 23:20:46.983694  102501 main.go:141] libmachine: (multinode-921619-m02) Getting domain xml...
	I1009 23:20:46.984368  102501 main.go:141] libmachine: (multinode-921619-m02) Creating domain...
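What follows is a jittered, roughly exponential backoff (245ms, 350ms, 470ms, ... 2.8s) while libvirt brings the domain up and assigns an IP. A self-contained sketch of that shape; lookupIP is hypothetical, standing in for the real scan of libvirt DHCP leases for the domain's MAC (52:54:00:56:ca:45 in this run):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("machine has no IP yet")

// lookupIP is a stand-in; it always fails so the demo shows the retry shape.
func lookupIP() (string, error) { return "", errNoIP }

func main() {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(3 * time.Second) // demo deadline; the real one is minutes
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("got IP:", ip)
			return
		}
		// Jittered, growing backoff, roughly matching the delays in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	fmt.Println("timed out waiting for machine IP")
}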
	I1009 23:20:48.219485  102501 main.go:141] libmachine: (multinode-921619-m02) Waiting to get IP...
	I1009 23:20:48.220359  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:48.220751  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:48.220838  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:48.220751  102756 retry.go:31] will retry after 245.464617ms: waiting for machine to come up
	I1009 23:20:48.468314  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:48.469046  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:48.469082  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:48.469004  102756 retry.go:31] will retry after 350.744462ms: waiting for machine to come up
	I1009 23:20:48.821651  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:48.822041  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:48.822074  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:48.821996  102756 retry.go:31] will retry after 470.473303ms: waiting for machine to come up
	I1009 23:20:49.293577  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:49.294000  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:49.294027  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:49.293956  102756 retry.go:31] will retry after 528.498289ms: waiting for machine to come up
	I1009 23:20:49.823754  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:49.824205  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:49.824239  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:49.824149  102756 retry.go:31] will retry after 599.07991ms: waiting for machine to come up
	I1009 23:20:50.425102  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:50.425578  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:50.425608  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:50.425558  102756 retry.go:31] will retry after 943.690172ms: waiting for machine to come up
	I1009 23:20:51.370851  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:51.371291  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:51.371313  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:51.371246  102756 retry.go:31] will retry after 854.904577ms: waiting for machine to come up
	I1009 23:20:52.227662  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:52.228276  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:52.228306  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:52.228191  102756 retry.go:31] will retry after 917.09776ms: waiting for machine to come up
	I1009 23:20:53.146757  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:53.147192  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:53.147219  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:53.147154  102756 retry.go:31] will retry after 1.295311521s: waiting for machine to come up
	I1009 23:20:54.444793  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:54.445242  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:54.445268  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:54.445172  102756 retry.go:31] will retry after 1.672827257s: waiting for machine to come up
	I1009 23:20:56.120177  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:56.120699  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:56.120730  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:56.120643  102756 retry.go:31] will retry after 2.846317127s: waiting for machine to come up
	I1009 23:20:58.968533  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:58.968968  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:58.968998  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:58.968916  102756 retry.go:31] will retry after 2.625389438s: waiting for machine to come up
	I1009 23:21:01.597675  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:01.598117  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:21:01.598146  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:21:01.598064  102756 retry.go:31] will retry after 3.673921353s: waiting for machine to come up
	I1009 23:21:05.275970  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.276389  102501 main.go:141] libmachine: (multinode-921619-m02) Found IP for machine: 192.168.39.121
	I1009 23:21:05.276417  102501 main.go:141] libmachine: (multinode-921619-m02) Reserving static IP address...
	I1009 23:21:05.276435  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has current primary IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.276813  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "multinode-921619-m02", mac: "52:54:00:56:ca:45", ip: "192.168.39.121"} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.276874  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | skip adding static IP to network mk-multinode-921619 - found existing host DHCP lease matching {name: "multinode-921619-m02", mac: "52:54:00:56:ca:45", ip: "192.168.39.121"}
	I1009 23:21:05.276899  102501 main.go:141] libmachine: (multinode-921619-m02) Reserved static IP address: 192.168.39.121
	I1009 23:21:05.276916  102501 main.go:141] libmachine: (multinode-921619-m02) Waiting for SSH to be available...
	I1009 23:21:05.276935  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | Getting to WaitForSSH function...
	I1009 23:21:05.278973  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.279297  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.279331  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.279458  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | Using SSH client type: external
	I1009 23:21:05.279481  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619-m02/id_rsa (-rw-------)
	I1009 23:21:05.279513  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 23:21:05.279533  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | About to run SSH command:
	I1009 23:21:05.279549  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | exit 0
	I1009 23:21:05.374318  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | SSH cmd err, output: <nil>: 
	I1009 23:21:05.374661  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetConfigRaw
	I1009 23:21:05.375254  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetIP
	I1009 23:21:05.377674  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.378063  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.378090  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.378311  102501 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/config.json ...
	I1009 23:21:05.378512  102501 machine.go:88] provisioning docker machine ...
	I1009 23:21:05.378529  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:21:05.378762  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetMachineName
	I1009 23:21:05.378938  102501 buildroot.go:166] provisioning hostname "multinode-921619-m02"
	I1009 23:21:05.378954  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetMachineName
	I1009 23:21:05.379121  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:05.381580  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.381916  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.381949  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.382097  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:05.382274  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:05.382429  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:05.382579  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:05.382753  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:21:05.383064  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1009 23:21:05.383078  102501 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-921619-m02 && echo "multinode-921619-m02" | sudo tee /etc/hostname
	I1009 23:21:05.526708  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-921619-m02
	
	I1009 23:21:05.526740  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:05.529479  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.529875  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.529901  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.530073  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:05.530273  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:05.530446  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:05.530597  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:05.530765  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:21:05.531082  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1009 23:21:05.531104  102501 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-921619-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-921619-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-921619-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 23:21:05.668451  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 23:21:05.668486  102501 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17375-78415/.minikube CaCertPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17375-78415/.minikube}
	I1009 23:21:05.668513  102501 buildroot.go:174] setting up certificates
	I1009 23:21:05.668525  102501 provision.go:83] configureAuth start
	I1009 23:21:05.668543  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetMachineName
	I1009 23:21:05.668856  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetIP
	I1009 23:21:05.672117  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.672492  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.672521  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.672621  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:05.674833  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.675258  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.675289  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.675379  102501 provision.go:138] copyHostCerts
	I1009 23:21:05.675418  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem
	I1009 23:21:05.675453  102501 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem, removing ...
	I1009 23:21:05.675465  102501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem
	I1009 23:21:05.675534  102501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem (1123 bytes)
	I1009 23:21:05.675605  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem
	I1009 23:21:05.675625  102501 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem, removing ...
	I1009 23:21:05.675631  102501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem
	I1009 23:21:05.675654  102501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem (1679 bytes)
	I1009 23:21:05.675696  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem
	I1009 23:21:05.675711  102501 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem, removing ...
	I1009 23:21:05.675717  102501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem
	I1009 23:21:05.675738  102501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem (1082 bytes)
	I1009 23:21:05.675781  102501 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca-key.pem org=jenkins.multinode-921619-m02 san=[192.168.39.121 192.168.39.121 localhost 127.0.0.1 minikube multinode-921619-m02]
	I1009 23:21:05.775297  102501 provision.go:172] copyRemoteCerts
	I1009 23:21:05.775364  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 23:21:05.775399  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:05.777922  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.778216  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.778241  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.778421  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:05.778618  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:05.778759  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:05.778903  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619-m02/id_rsa Username:docker}
	I1009 23:21:05.871513  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 23:21:05.871585  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 23:21:05.898494  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 23:21:05.898564  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 23:21:05.924733  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 23:21:05.924807  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1009 23:21:05.950405  102501 provision.go:86] duration metric: configureAuth took 281.86296ms
	I1009 23:21:05.950428  102501 buildroot.go:189] setting minikube options for container-runtime
	I1009 23:21:05.950675  102501 config.go:182] Loaded profile config "multinode-921619": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:21:05.950700  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:21:05.950985  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:05.953474  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.953818  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.953848  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.954012  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:05.954222  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:05.954392  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:05.954540  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:05.954775  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:21:05.955252  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1009 23:21:05.955270  102501 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1009 23:21:06.084257  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1009 23:21:06.084284  102501 buildroot.go:70] root file system type: tmpfs
	I1009 23:21:06.084443  102501 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1009 23:21:06.084467  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:06.087304  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:06.087702  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:06.087722  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:06.087930  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:06.088129  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:06.088329  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:06.088489  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:06.088630  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:21:06.088929  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1009 23:21:06.088987  102501 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.167"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1009 23:21:06.235570  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.167
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1009 23:21:06.235608  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:06.238489  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:06.238958  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:06.238980  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:06.239186  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:06.239383  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:06.239528  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:06.239660  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:06.239802  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:21:06.240139  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1009 23:21:06.240165  102501 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1009 23:21:07.140108  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1009 23:21:07.140142  102501 machine.go:91] provisioned docker machine in 1.761612342s
	I1009 23:21:07.140154  102501 start.go:300] post-start starting for "multinode-921619-m02" (driver="kvm2")
	I1009 23:21:07.140165  102501 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 23:21:07.140181  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:21:07.140568  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 23:21:07.140608  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:07.143238  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.143593  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:07.143628  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.143735  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:07.143932  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:07.144139  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:07.144298  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619-m02/id_rsa Username:docker}
	I1009 23:21:07.241724  102501 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 23:21:07.246026  102501 command_runner.go:130] > NAME=Buildroot
	I1009 23:21:07.246048  102501 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1009 23:21:07.246055  102501 command_runner.go:130] > ID=buildroot
	I1009 23:21:07.246064  102501 command_runner.go:130] > VERSION_ID=2021.02.12
	I1009 23:21:07.246072  102501 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1009 23:21:07.246215  102501 info.go:137] Remote host: Buildroot 2021.02.12
	I1009 23:21:07.246237  102501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-78415/.minikube/addons for local assets ...
	I1009 23:21:07.246303  102501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-78415/.minikube/files for local assets ...
	I1009 23:21:07.246394  102501 filesync.go:149] local asset: /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem -> 856012.pem in /etc/ssl/certs
	I1009 23:21:07.246408  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem -> /etc/ssl/certs/856012.pem
	I1009 23:21:07.246528  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 23:21:07.256350  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem --> /etc/ssl/certs/856012.pem (1708 bytes)
	I1009 23:21:07.280688  102501 start.go:303] post-start completed in 140.517748ms
	I1009 23:21:07.280709  102501 fix.go:56] fixHost completed within 20.320607071s
	I1009 23:21:07.280736  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:07.283160  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.283506  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:07.283538  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.283648  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:07.283836  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:07.284052  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:07.284222  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:07.284416  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:21:07.284868  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1009 23:21:07.284885  102501 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 23:21:07.415288  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696893667.361858032
	
	I1009 23:21:07.415308  102501 fix.go:206] guest clock: 1696893667.361858032
	I1009 23:21:07.415323  102501 fix.go:219] Guest: 2023-10-09 23:21:07.361858032 +0000 UTC Remote: 2023-10-09 23:21:07.280714025 +0000 UTC m=+84.775359462 (delta=81.144007ms)
	I1009 23:21:07.415338  102501 fix.go:190] guest clock delta is within tolerance: 81.144007ms
	I1009 23:21:07.415343  102501 start.go:83] releasing machines lock for "multinode-921619-m02", held for 20.45527802s
	I1009 23:21:07.415385  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:21:07.415661  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetIP
	I1009 23:21:07.418237  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.418631  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:07.418664  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.420859  102501 out.go:177] * Found network options:
	I1009 23:21:07.422414  102501 out.go:177]   - NO_PROXY=192.168.39.167
	W1009 23:21:07.423800  102501 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 23:21:07.423827  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:21:07.424371  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:21:07.424563  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:21:07.424650  102501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 23:21:07.424698  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	W1009 23:21:07.424799  102501 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 23:21:07.424880  102501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 23:21:07.424909  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:07.427387  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.427667  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.427774  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:07.427799  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.427981  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:07.428060  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:07.428088  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.428155  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:07.428260  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:07.428362  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:07.428427  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:07.428506  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619-m02/id_rsa Username:docker}
	I1009 23:21:07.428552  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:07.428701  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619-m02/id_rsa Username:docker}
	I1009 23:21:07.544604  102501 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 23:21:07.545508  102501 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 23:21:07.545555  102501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 23:21:07.545624  102501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:21:07.562734  102501 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1009 23:21:07.562776  102501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 23:21:07.562793  102501 start.go:472] detecting cgroup driver to use...
	I1009 23:21:07.562952  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:21:07.579411  102501 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1009 23:21:07.579863  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1009 23:21:07.590123  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 23:21:07.600552  102501 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1009 23:21:07.600609  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1009 23:21:07.610642  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 23:21:07.620936  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 23:21:07.631499  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 23:21:07.641667  102501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 23:21:07.651948  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 23:21:07.662213  102501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 23:21:07.671249  102501 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 23:21:07.671396  102501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 23:21:07.681591  102501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:21:07.782428  102501 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 23:21:07.803946  102501 start.go:472] detecting cgroup driver to use...
	I1009 23:21:07.804035  102501 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1009 23:21:07.817049  102501 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1009 23:21:07.818004  102501 command_runner.go:130] > [Unit]
	I1009 23:21:07.818024  102501 command_runner.go:130] > Description=Docker Application Container Engine
	I1009 23:21:07.818030  102501 command_runner.go:130] > Documentation=https://docs.docker.com
	I1009 23:21:07.818035  102501 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1009 23:21:07.818041  102501 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1009 23:21:07.818046  102501 command_runner.go:130] > StartLimitBurst=3
	I1009 23:21:07.818050  102501 command_runner.go:130] > StartLimitIntervalSec=60
	I1009 23:21:07.818054  102501 command_runner.go:130] > [Service]
	I1009 23:21:07.818059  102501 command_runner.go:130] > Type=notify
	I1009 23:21:07.818063  102501 command_runner.go:130] > Restart=on-failure
	I1009 23:21:07.818071  102501 command_runner.go:130] > Environment=NO_PROXY=192.168.39.167
	I1009 23:21:07.818083  102501 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1009 23:21:07.818097  102501 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1009 23:21:07.818103  102501 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1009 23:21:07.818109  102501 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1009 23:21:07.818124  102501 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1009 23:21:07.818135  102501 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1009 23:21:07.818152  102501 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1009 23:21:07.818172  102501 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1009 23:21:07.818182  102501 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1009 23:21:07.818187  102501 command_runner.go:130] > ExecStart=
	I1009 23:21:07.818205  102501 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1009 23:21:07.818216  102501 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1009 23:21:07.818226  102501 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1009 23:21:07.818303  102501 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1009 23:21:07.818320  102501 command_runner.go:130] > LimitNOFILE=infinity
	I1009 23:21:07.818327  102501 command_runner.go:130] > LimitNPROC=infinity
	I1009 23:21:07.818334  102501 command_runner.go:130] > LimitCORE=infinity
	I1009 23:21:07.818348  102501 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1009 23:21:07.818360  102501 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1009 23:21:07.818371  102501 command_runner.go:130] > TasksMax=infinity
	I1009 23:21:07.818379  102501 command_runner.go:130] > TimeoutStartSec=0
	I1009 23:21:07.818392  102501 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1009 23:21:07.818401  102501 command_runner.go:130] > Delegate=yes
	I1009 23:21:07.818410  102501 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1009 23:21:07.818425  102501 command_runner.go:130] > KillMode=process
	I1009 23:21:07.818436  102501 command_runner.go:130] > [Install]
	I1009 23:21:07.818444  102501 command_runner.go:130] > WantedBy=multi-user.target
	I1009 23:21:07.818713  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:21:07.831420  102501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 23:21:07.847570  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:21:07.860576  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 23:21:07.874179  102501 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 23:21:07.910629  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 23:21:07.922800  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:21:07.940164  102501 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1009 23:21:07.940622  102501 ssh_runner.go:195] Run: which cri-dockerd
	I1009 23:21:07.944358  102501 command_runner.go:130] > /usr/bin/cri-dockerd
	I1009 23:21:07.944465  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1009 23:21:07.953761  102501 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1009 23:21:07.970298  102501 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1009 23:21:08.092815  102501 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1009 23:21:08.212602  102501 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1009 23:21:08.212638  102501 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1009 23:21:08.229521  102501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:21:08.331539  102501 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 23:21:09.763543  102501 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.431962056s)
	I1009 23:21:09.763613  102501 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1009 23:21:09.865178  102501 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1009 23:21:09.980252  102501 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1009 23:21:10.091712  102501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:21:10.198034  102501 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1009 23:21:10.212997  102501 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I1009 23:21:10.215690  102501 out.go:177] 
	W1009 23:21:10.217058  102501 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W1009 23:21:10.217073  102501 out.go:239] * 
	W1009 23:21:10.217948  102501 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 23:21:10.219795  102501 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-linux-amd64 start -p multinode-921619 --wait=true -v=8 --alsologtostderr --driver=kvm2 " : exit status 90
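Editor's note: the terminal failure in the trace above is the last provisioning step, `sudo systemctl restart cri-docker.socket`, exiting with status 1 on the worker VM, which minikube surfaces as RUNTIME_ENABLE. A minimal triage sketch follows, assuming the profile and node names from this run (`multinode-921619`, worker node `m02`) and that the VM is still reachable; the short `-n m02` node-name form and the unit's actual state are assumptions, not taken from this log:

	# Check the failing socket unit and its recent journal on the worker node
	out/minikube-linux-amd64 ssh -p multinode-921619 -n m02 -- sudo systemctl status cri-docker.socket --no-pager
	out/minikube-linux-amd64 ssh -p multinode-921619 -n m02 -- sudo journalctl -u cri-docker.socket -n 50 --no-pager
	# Collect full logs for a bug report, as the error text suggests
	out/minikube-linux-amd64 logs -p multinode-921619 --file=logs.txt

This is only a diagnostic sketch; the test harness performs its own post-mortem collection below.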
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-921619 -n multinode-921619
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-921619 logs -n 25: (1.227576125s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-921619 cp multinode-921619-m02:/home/docker/cp-test.txt                       | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | multinode-921619:/home/docker/cp-test_multinode-921619-m02_multinode-921619.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-921619 ssh -n                                                                 | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | multinode-921619-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-921619 ssh -n multinode-921619 sudo cat                                       | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | /home/docker/cp-test_multinode-921619-m02_multinode-921619.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-921619 cp multinode-921619-m02:/home/docker/cp-test.txt                       | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | multinode-921619-m03:/home/docker/cp-test_multinode-921619-m02_multinode-921619-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-921619 ssh -n                                                                 | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | multinode-921619-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-921619 ssh -n multinode-921619-m03 sudo cat                                   | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | /home/docker/cp-test_multinode-921619-m02_multinode-921619-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-921619 cp testdata/cp-test.txt                                                | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | multinode-921619-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-921619 ssh -n                                                                 | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | multinode-921619-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-921619 cp multinode-921619-m03:/home/docker/cp-test.txt                       | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2293928982/001/cp-test_multinode-921619-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-921619 ssh -n                                                                 | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | multinode-921619-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-921619 cp multinode-921619-m03:/home/docker/cp-test.txt                       | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | multinode-921619:/home/docker/cp-test_multinode-921619-m03_multinode-921619.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-921619 ssh -n                                                                 | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | multinode-921619-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-921619 ssh -n multinode-921619 sudo cat                                       | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | /home/docker/cp-test_multinode-921619-m03_multinode-921619.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-921619 cp multinode-921619-m03:/home/docker/cp-test.txt                       | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | multinode-921619-m02:/home/docker/cp-test_multinode-921619-m03_multinode-921619-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-921619 ssh -n                                                                 | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | multinode-921619-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-921619 ssh -n multinode-921619-m02 sudo cat                                   | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	|         | /home/docker/cp-test_multinode-921619-m03_multinode-921619-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-921619 node stop m03                                                          | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:15 UTC |
	| node    | multinode-921619 node start                                                             | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:15 UTC | 09 Oct 23 23:16 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-921619                                                                | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:16 UTC |                     |
	| stop    | -p multinode-921619                                                                     | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:16 UTC | 09 Oct 23 23:16 UTC |
	| start   | -p multinode-921619                                                                     | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:16 UTC | 09 Oct 23 23:19 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-921619                                                                | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:19 UTC |                     |
	| node    | multinode-921619 node delete                                                            | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:19 UTC | 09 Oct 23 23:19 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-921619 stop                                                                   | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:19 UTC | 09 Oct 23 23:19 UTC |
	| start   | -p multinode-921619                                                                     | multinode-921619 | jenkins | v1.31.2 | 09 Oct 23 23:19 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
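	(The Audit table above is minikube's per-profile command history. The final row, the start invocation under test, shows a Start Time of 23:19 UTC and an empty End Time, consistent with the non-zero exit recorded earlier.)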
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/09 23:19:42
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
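	(Reading the first entry below against that format: in "I1009 23:19:42.554319  102501 out.go:296", "I" is the Info severity, "1009" the month and day, "23:19:42.554319" the wall-clock time, "102501" the thread id, and "out.go:296" the source file and line that emitted the message.)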
	I1009 23:19:42.554319  102501 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:19:42.554438  102501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:19:42.554447  102501 out.go:309] Setting ErrFile to fd 2...
	I1009 23:19:42.554452  102501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:19:42.554694  102501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-78415/.minikube/bin
	I1009 23:19:42.555224  102501 out.go:303] Setting JSON to false
	I1009 23:19:42.556124  102501 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10930,"bootTime":1696882653,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 23:19:42.556185  102501 start.go:138] virtualization: kvm guest
	I1009 23:19:42.558589  102501 out.go:177] * [multinode-921619] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1009 23:19:42.560021  102501 out.go:177]   - MINIKUBE_LOCATION=17375
	I1009 23:19:42.561515  102501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 23:19:42.560032  102501 notify.go:220] Checking for updates...
	I1009 23:19:42.564258  102501 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17375-78415/kubeconfig
	I1009 23:19:42.565674  102501 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	I1009 23:19:42.567066  102501 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 23:19:42.568463  102501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 23:19:42.570393  102501 config.go:182] Loaded profile config "multinode-921619": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:19:42.570824  102501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:19:42.570907  102501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:19:42.585661  102501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40847
	I1009 23:19:42.586059  102501 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:19:42.586668  102501 main.go:141] libmachine: Using API Version  1
	I1009 23:19:42.586693  102501 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:19:42.587078  102501 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:19:42.587290  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:19:42.587624  102501 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 23:19:42.588013  102501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:19:42.588056  102501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:19:42.601943  102501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
	I1009 23:19:42.602285  102501 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:19:42.602765  102501 main.go:141] libmachine: Using API Version  1
	I1009 23:19:42.602786  102501 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:19:42.603047  102501 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:19:42.603250  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:19:42.637234  102501 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 23:19:42.638613  102501 start.go:298] selected driver: kvm2
	I1009 23:19:42.638626  102501 start.go:902] validating driver "kvm2" against &{Name:multinode-921619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-921619 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:19:42.638763  102501 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 23:19:42.639070  102501 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:19:42.639133  102501 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17375-78415/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 23:19:42.653262  102501 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1009 23:19:42.654004  102501 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 23:19:42.654068  102501 cni.go:84] Creating CNI manager for ""
	I1009 23:19:42.654078  102501 cni.go:136] 2 nodes found, recommending kindnet
	I1009 23:19:42.654090  102501 start_flags.go:323] config:
	{Name:multinode-921619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-921619 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:19:42.654293  102501 iso.go:125] acquiring lock: {Name:mk8f0545fb1f7801479f5eb65fbe7d8f303a99cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:19:42.656913  102501 out.go:177] * Starting control plane node multinode-921619 in cluster multinode-921619
	I1009 23:19:42.658142  102501 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1009 23:19:42.658176  102501 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17375-78415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1009 23:19:42.658184  102501 cache.go:57] Caching tarball of preloaded images
	I1009 23:19:42.658274  102501 preload.go:174] Found /home/jenkins/minikube-integration/17375-78415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1009 23:19:42.658285  102501 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1009 23:19:42.658393  102501 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/config.json ...
	I1009 23:19:42.658592  102501 start.go:365] acquiring machines lock for multinode-921619: {Name:mk4d06451f08f4d0dfbc191a7a07492b6e7c9c1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 23:19:42.658633  102501 start.go:369] acquired machines lock for "multinode-921619" in 22.028µs
	I1009 23:19:42.658645  102501 start.go:96] Skipping create...Using existing machine configuration
	I1009 23:19:42.658652  102501 fix.go:54] fixHost starting: 
	I1009 23:19:42.658915  102501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:19:42.658948  102501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:19:42.672648  102501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40541
	I1009 23:19:42.673039  102501 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:19:42.673480  102501 main.go:141] libmachine: Using API Version  1
	I1009 23:19:42.673502  102501 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:19:42.673799  102501 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:19:42.673993  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:19:42.674141  102501 main.go:141] libmachine: (multinode-921619) Calling .GetState
	I1009 23:19:42.676000  102501 fix.go:102] recreateIfNeeded on multinode-921619: state=Stopped err=<nil>
	I1009 23:19:42.676021  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	W1009 23:19:42.676184  102501 fix.go:128] unexpected machine state, will restart: <nil>
	I1009 23:19:42.678714  102501 out.go:177] * Restarting existing kvm2 VM for "multinode-921619" ...
	I1009 23:19:42.680025  102501 main.go:141] libmachine: (multinode-921619) Calling .Start
	I1009 23:19:42.680203  102501 main.go:141] libmachine: (multinode-921619) Ensuring networks are active...
	I1009 23:19:42.681001  102501 main.go:141] libmachine: (multinode-921619) Ensuring network default is active
	I1009 23:19:42.681449  102501 main.go:141] libmachine: (multinode-921619) Ensuring network mk-multinode-921619 is active
	I1009 23:19:42.681823  102501 main.go:141] libmachine: (multinode-921619) Getting domain xml...
	I1009 23:19:42.682587  102501 main.go:141] libmachine: (multinode-921619) Creating domain...
	I1009 23:19:43.899709  102501 main.go:141] libmachine: (multinode-921619) Waiting to get IP...
	I1009 23:19:43.900830  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:43.901318  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:43.901439  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:43.901326  102536 retry.go:31] will retry after 237.405822ms: waiting for machine to come up
	I1009 23:19:44.140909  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:44.141369  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:44.141395  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:44.141285  102536 retry.go:31] will retry after 330.20986ms: waiting for machine to come up
	I1009 23:19:44.472830  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:44.473397  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:44.473498  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:44.473334  102536 retry.go:31] will retry after 424.010882ms: waiting for machine to come up
	I1009 23:19:44.898955  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:44.899336  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:44.899367  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:44.899285  102536 retry.go:31] will retry after 485.273155ms: waiting for machine to come up
	I1009 23:19:45.386042  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:45.386267  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:45.386298  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:45.386223  102536 retry.go:31] will retry after 587.068913ms: waiting for machine to come up
	I1009 23:19:45.975115  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:45.975524  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:45.975555  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:45.975475  102536 retry.go:31] will retry after 594.885578ms: waiting for machine to come up
	I1009 23:19:46.572228  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:46.572710  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:46.572732  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:46.572648  102536 retry.go:31] will retry after 896.005691ms: waiting for machine to come up
	I1009 23:19:47.470886  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:47.471343  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:47.471370  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:47.471299  102536 retry.go:31] will retry after 1.167441753s: waiting for machine to come up
	I1009 23:19:48.640221  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:48.640797  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:48.640828  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:48.640750  102536 retry.go:31] will retry after 1.388777428s: waiting for machine to come up
	I1009 23:19:50.031274  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:50.031649  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:50.031693  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:50.031576  102536 retry.go:31] will retry after 1.747281603s: waiting for machine to come up
	I1009 23:19:51.781705  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:51.782185  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:51.782218  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:51.782122  102536 retry.go:31] will retry after 2.469919209s: waiting for machine to come up
	I1009 23:19:54.253897  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:54.254261  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:54.254291  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:54.254238  102536 retry.go:31] will retry after 2.229572497s: waiting for machine to come up
	I1009 23:19:56.486729  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:56.487104  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:56.487122  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:56.487070  102536 retry.go:31] will retry after 3.115495801s: waiting for machine to come up
	I1009 23:19:59.604928  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:19:59.605366  102501 main.go:141] libmachine: (multinode-921619) DBG | unable to find current IP address of domain multinode-921619 in network mk-multinode-921619
	I1009 23:19:59.605390  102501 main.go:141] libmachine: (multinode-921619) DBG | I1009 23:19:59.605317  102536 retry.go:31] will retry after 3.442831938s: waiting for machine to come up
	I1009 23:20:03.049586  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.050068  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has current primary IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.050095  102501 main.go:141] libmachine: (multinode-921619) Found IP for machine: 192.168.39.167
	I1009 23:20:03.050107  102501 main.go:141] libmachine: (multinode-921619) Reserving static IP address...
	I1009 23:20:03.050537  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "multinode-921619", mac: "52:54:00:65:2b:27", ip: "192.168.39.167"} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.050566  102501 main.go:141] libmachine: (multinode-921619) Reserved static IP address: 192.168.39.167
	I1009 23:20:03.050580  102501 main.go:141] libmachine: (multinode-921619) DBG | skip adding static IP to network mk-multinode-921619 - found existing host DHCP lease matching {name: "multinode-921619", mac: "52:54:00:65:2b:27", ip: "192.168.39.167"}
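	The sequence above is the driver's wait-for-IP loop: each retry.go line sleeps a growing, jittered delay (237ms up to about 3.4s here) until a DHCP lease for the VM's MAC address appears. A minimal sketch of that style of backoff loop, as an illustration only and not minikube's actual retry package:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check with roughly exponential, jittered backoff, in the
	// spirit of the retry.go lines in the log above (illustrative sketch only).
	func waitFor(check func() (string, bool), maxWait time.Duration) (string, error) {
		deadline := time.Now().Add(maxWait)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, ok := check(); ok {
				return ip, nil
			}
			// Sleep the base delay plus up to half of it as jitter.
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
			if delay *= 2; delay > 4*time.Second {
				delay = 4 * time.Second // cap the backoff
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		tries := 0
		ip, err := waitFor(func() (string, bool) {
			tries++
			return "192.168.39.167", tries > 3 // stand-in for the DHCP lease lookup
		}, time.Minute)
		fmt.Println(ip, err)
	}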
	I1009 23:20:03.050595  102501 main.go:141] libmachine: (multinode-921619) DBG | Getting to WaitForSSH function...
	I1009 23:20:03.050616  102501 main.go:141] libmachine: (multinode-921619) Waiting for SSH to be available...
	I1009 23:20:03.052668  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.052975  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.052997  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.053105  102501 main.go:141] libmachine: (multinode-921619) DBG | Using SSH client type: external
	I1009 23:20:03.053133  102501 main.go:141] libmachine: (multinode-921619) DBG | Using SSH private key: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619/id_rsa (-rw-------)
	I1009 23:20:03.053153  102501 main.go:141] libmachine: (multinode-921619) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 23:20:03.053164  102501 main.go:141] libmachine: (multinode-921619) DBG | About to run SSH command:
	I1009 23:20:03.053179  102501 main.go:141] libmachine: (multinode-921619) DBG | exit 0
	I1009 23:20:03.142014  102501 main.go:141] libmachine: (multinode-921619) DBG | SSH cmd err, output: <nil>: 
	I1009 23:20:03.142377  102501 main.go:141] libmachine: (multinode-921619) Calling .GetConfigRaw
	I1009 23:20:03.143029  102501 main.go:141] libmachine: (multinode-921619) Calling .GetIP
	I1009 23:20:03.145626  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.145990  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.146024  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.146294  102501 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/config.json ...
	I1009 23:20:03.146512  102501 machine.go:88] provisioning docker machine ...
	I1009 23:20:03.146531  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:20:03.146757  102501 main.go:141] libmachine: (multinode-921619) Calling .GetMachineName
	I1009 23:20:03.146915  102501 buildroot.go:166] provisioning hostname "multinode-921619"
	I1009 23:20:03.146930  102501 main.go:141] libmachine: (multinode-921619) Calling .GetMachineName
	I1009 23:20:03.147080  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:03.149243  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.149566  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.149606  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.149676  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:03.149854  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.150025  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.150145  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:03.150273  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:20:03.150618  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I1009 23:20:03.150629  102501 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-921619 && echo "multinode-921619" | sudo tee /etc/hostname
	I1009 23:20:03.277603  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-921619
	
	I1009 23:20:03.277638  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:03.280400  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.280747  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.280790  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.280946  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:03.281156  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.281346  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.281498  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:03.281671  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:20:03.281998  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I1009 23:20:03.282032  102501 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-921619' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-921619/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-921619' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 23:20:03.405672  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
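	(The script above pins the machine's hostname to the 127.0.1.1 line of /etc/hosts, Debian-style, so the node resolves its own name before cluster networking is up; the empty command output means one of the silent branches ran, i.e. the entry was already present or was rewritten in place with sed.)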
	I1009 23:20:03.405707  102501 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17375-78415/.minikube CaCertPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17375-78415/.minikube}
	I1009 23:20:03.405749  102501 buildroot.go:174] setting up certificates
	I1009 23:20:03.405760  102501 provision.go:83] configureAuth start
	I1009 23:20:03.405779  102501 main.go:141] libmachine: (multinode-921619) Calling .GetMachineName
	I1009 23:20:03.406085  102501 main.go:141] libmachine: (multinode-921619) Calling .GetIP
	I1009 23:20:03.408851  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.409320  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.409345  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.409568  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:03.411602  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.411933  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.411958  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.412028  102501 provision.go:138] copyHostCerts
	I1009 23:20:03.412072  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem
	I1009 23:20:03.412119  102501 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem, removing ...
	I1009 23:20:03.412133  102501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem
	I1009 23:20:03.412212  102501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem (1082 bytes)
	I1009 23:20:03.412334  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem
	I1009 23:20:03.412371  102501 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem, removing ...
	I1009 23:20:03.412381  102501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem
	I1009 23:20:03.412422  102501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem (1123 bytes)
	I1009 23:20:03.412526  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem
	I1009 23:20:03.412554  102501 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem, removing ...
	I1009 23:20:03.412566  102501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem
	I1009 23:20:03.412601  102501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem (1679 bytes)
	I1009 23:20:03.412678  102501 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca-key.pem org=jenkins.multinode-921619 san=[192.168.39.167 192.168.39.167 localhost 127.0.0.1 minikube multinode-921619]
	I1009 23:20:03.559867  102501 provision.go:172] copyRemoteCerts
	I1009 23:20:03.559927  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 23:20:03.559953  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:03.563117  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.563509  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.563535  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.563718  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:03.563915  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.564079  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:03.564215  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619/id_rsa Username:docker}
	I1009 23:20:03.656572  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 23:20:03.656659  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 23:20:03.678392  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 23:20:03.678450  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 23:20:03.700167  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 23:20:03.700229  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 23:20:03.721303  102501 provision.go:86] duration metric: configureAuth took 315.526073ms
	I1009 23:20:03.721327  102501 buildroot.go:189] setting minikube options for container-runtime
	I1009 23:20:03.721538  102501 config.go:182] Loaded profile config "multinode-921619": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:20:03.721562  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:20:03.721848  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:03.724544  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.724947  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.724981  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.725099  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:03.725327  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.725477  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.725594  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:03.725754  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:20:03.726050  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I1009 23:20:03.726062  102501 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1009 23:20:03.843926  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1009 23:20:03.843954  102501 buildroot.go:70] root file system type: tmpfs
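	(A tmpfs root means nothing written under / survives a VM restart, which is why this "existing machine" still needs its certificates re-pushed, its hostname re-set, and its docker unit reinstalled on every boot.)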
	I1009 23:20:03.844107  102501 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1009 23:20:03.844149  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:03.847133  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.847492  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.847529  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.847708  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:03.847909  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.848085  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.848230  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:03.848385  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:20:03.848727  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I1009 23:20:03.848791  102501 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1009 23:20:03.980374  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1009 23:20:03.980448  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:03.983127  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.983489  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:03.983522  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:03.983701  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:03.983874  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.984045  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:03.984160  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:03.984295  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:20:03.984673  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I1009 23:20:03.984693  102501 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1009 23:20:04.899192  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1009 23:20:04.899224  102501 machine.go:91] provisioned docker machine in 1.752695342s
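The step just completed uses a diff-or-swap idiom: the freshly rendered unit is written to docker.service.new, and the mv/daemon-reload/enable/restart chain only runs when diff exits non-zero, which covers both "the unit changed" and, as in this run, "the unit does not exist yet". Below is a minimal Go sketch of that idempotent pattern, run locally rather than over SSH; updateUnitIfChanged is a hypothetical helper name, not minikube's API.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// updateUnitIfChanged swaps in a freshly rendered systemd unit only when it
	// differs from the copy on disk, then reloads systemd and restarts the
	// service. diff exits non-zero both when the files differ and when the old
	// file is missing, so a first-time install takes the same path.
	func updateUnitIfChanged(unit string) error {
		cmd := fmt.Sprintf(
			"sudo diff -u /lib/systemd/system/%[1]s /lib/systemd/system/%[1]s.new || "+
				"{ sudo mv /lib/systemd/system/%[1]s.new /lib/systemd/system/%[1]s; "+
				"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[1]s && sudo systemctl -f restart %[1]s; }",
			unit)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("update %s: %v: %s", unit, err, out)
		}
		return nil
	}

	func main() {
		if err := updateUnitIfChanged("docker.service"); err != nil {
			fmt.Println(err)
		}
	}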
	I1009 23:20:04.899245  102501 start.go:300] post-start starting for "multinode-921619" (driver="kvm2")
	I1009 23:20:04.899256  102501 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 23:20:04.899277  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:20:04.899612  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 23:20:04.899653  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:04.902154  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:04.902555  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:04.902583  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:04.902771  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:04.902952  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:04.903125  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:04.903226  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619/id_rsa Username:docker}
	I1009 23:20:04.992146  102501 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 23:20:04.996385  102501 command_runner.go:130] > NAME=Buildroot
	I1009 23:20:04.996406  102501 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1009 23:20:04.996421  102501 command_runner.go:130] > ID=buildroot
	I1009 23:20:04.996429  102501 command_runner.go:130] > VERSION_ID=2021.02.12
	I1009 23:20:04.996437  102501 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1009 23:20:04.996508  102501 info.go:137] Remote host: Buildroot 2021.02.12
	I1009 23:20:04.996532  102501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-78415/.minikube/addons for local assets ...
	I1009 23:20:04.996599  102501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-78415/.minikube/files for local assets ...
	I1009 23:20:04.996698  102501 filesync.go:149] local asset: /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem -> 856012.pem in /etc/ssl/certs
	I1009 23:20:04.996711  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem -> /etc/ssl/certs/856012.pem
	I1009 23:20:04.996824  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 23:20:05.004858  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem --> /etc/ssl/certs/856012.pem (1708 bytes)
	I1009 23:20:05.027499  102501 start.go:303] post-start completed in 128.238762ms
	I1009 23:20:05.027519  102501 fix.go:56] fixHost completed within 22.368865879s
	I1009 23:20:05.027539  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:05.030028  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.030398  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:05.030431  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.030597  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:05.030795  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:05.030927  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:05.031051  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:05.031206  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:20:05.031517  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I1009 23:20:05.031533  102501 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1009 23:20:05.147008  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696893605.096589079
	
	I1009 23:20:05.147032  102501 fix.go:206] guest clock: 1696893605.096589079
	I1009 23:20:05.147040  102501 fix.go:219] Guest: 2023-10-09 23:20:05.096589079 +0000 UTC Remote: 2023-10-09 23:20:05.027522172 +0000 UTC m=+22.522167554 (delta=69.066907ms)
	I1009 23:20:05.147063  102501 fix.go:190] guest clock delta is within tolerance: 69.066907ms
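The guest-clock check above parses the output of date +%s.%N on the VM and compares it against the host's wall clock; only a delta beyond some tolerance would trigger corrective action. A self-contained sketch of that comparison follows (the 2s tolerance is a made-up value for illustration, not minikube's):

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		// Output of `date +%s.%N` captured from the guest, as in the log above.
		guestOut := "1696893605.096589079"
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			panic(err)
		}
		// float64 loses a few hundred nanoseconds of precision; irrelevant at
		// the millisecond-scale deltas involved here.
		guest := time.Unix(0, int64(secs*float64(time.Second)))

		const tolerance = 2 * time.Second // hypothetical threshold
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}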
	I1009 23:20:05.147070  102501 start.go:83] releasing machines lock for "multinode-921619", held for 22.488427405s
	I1009 23:20:05.147105  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:20:05.147388  102501 main.go:141] libmachine: (multinode-921619) Calling .GetIP
	I1009 23:20:05.149888  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.150249  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:05.150280  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.150485  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:20:05.150954  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:20:05.151101  102501 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:20:05.151199  102501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 23:20:05.151238  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:05.151321  102501 ssh_runner.go:195] Run: cat /version.json
	I1009 23:20:05.151346  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:20:05.154023  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.154169  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.154415  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:05.154445  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.154490  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:05.154528  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:05.154614  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:05.154725  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:20:05.154810  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:05.154907  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:20:05.154984  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:05.155001  102501 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:20:05.155094  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619/id_rsa Username:docker}
	I1009 23:20:05.155192  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619/id_rsa Username:docker}
	I1009 23:20:05.260508  102501 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 23:20:05.261108  102501 command_runner.go:130] > {"iso_version": "v1.31.0-1695060926-17240", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "0402681e4770013826956f326b174c70611f3073"}
	I1009 23:20:05.261272  102501 ssh_runner.go:195] Run: systemctl --version
	I1009 23:20:05.266667  102501 command_runner.go:130] > systemd 247 (247)
	I1009 23:20:05.266703  102501 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1009 23:20:05.266772  102501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 23:20:05.271860  102501 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 23:20:05.271969  102501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 23:20:05.272037  102501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:20:05.285541  102501 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1009 23:20:05.285571  102501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
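Rather than deleting competing CNI configs, the find/mv step parks them under a .mk_disabled suffix so only the intended CNI remains loadable and the originals could be restored later. A small Go equivalent of that rename pass, assuming the same directory and name patterns as the command above:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		const dir = "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, e := range entries {
			name := e.Name()
			// Skip directories and configs that were already parked.
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					fmt.Println(err)
				}
			}
		}
	}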
	I1009 23:20:05.285583  102501 start.go:472] detecting cgroup driver to use...
	I1009 23:20:05.285708  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:20:05.301938  102501 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1009 23:20:05.302014  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1009 23:20:05.311927  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 23:20:05.321797  102501 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1009 23:20:05.321864  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1009 23:20:05.331819  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 23:20:05.341858  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 23:20:05.351719  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 23:20:05.361423  102501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 23:20:05.371820  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 23:20:05.381532  102501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 23:20:05.390418  102501 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 23:20:05.390496  102501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 23:20:05.399122  102501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:20:05.500931  102501 ssh_runner.go:195] Run: sudo systemctl restart containerd
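The sed pipeline above rewrites /etc/containerd/config.toml so containerd's runc shim uses the same cgroup driver ("cgroupfs") that the kubelet is configured with further down; a driver mismatch between runtime and kubelet is a classic source of NotReady nodes. A minimal Go rendition of just the SystemdCgroup edit, as a sketch of one of those sed invocations:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println(err)
			return
		}
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			fmt.Println(err)
		}
	}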
	I1009 23:20:05.519011  102501 start.go:472] detecting cgroup driver to use...
	I1009 23:20:05.519094  102501 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1009 23:20:05.531353  102501 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1009 23:20:05.531368  102501 command_runner.go:130] > [Unit]
	I1009 23:20:05.531374  102501 command_runner.go:130] > Description=Docker Application Container Engine
	I1009 23:20:05.531379  102501 command_runner.go:130] > Documentation=https://docs.docker.com
	I1009 23:20:05.531385  102501 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1009 23:20:05.531390  102501 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1009 23:20:05.531403  102501 command_runner.go:130] > StartLimitBurst=3
	I1009 23:20:05.531408  102501 command_runner.go:130] > StartLimitIntervalSec=60
	I1009 23:20:05.531412  102501 command_runner.go:130] > [Service]
	I1009 23:20:05.531416  102501 command_runner.go:130] > Type=notify
	I1009 23:20:05.531424  102501 command_runner.go:130] > Restart=on-failure
	I1009 23:20:05.531439  102501 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1009 23:20:05.531460  102501 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1009 23:20:05.531472  102501 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1009 23:20:05.531486  102501 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1009 23:20:05.531497  102501 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1009 23:20:05.531510  102501 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1009 23:20:05.531523  102501 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1009 23:20:05.531543  102501 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1009 23:20:05.531558  102501 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1009 23:20:05.531564  102501 command_runner.go:130] > ExecStart=
	I1009 23:20:05.531590  102501 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1009 23:20:05.531602  102501 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1009 23:20:05.531609  102501 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1009 23:20:05.531615  102501 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1009 23:20:05.531619  102501 command_runner.go:130] > LimitNOFILE=infinity
	I1009 23:20:05.531623  102501 command_runner.go:130] > LimitNPROC=infinity
	I1009 23:20:05.531627  102501 command_runner.go:130] > LimitCORE=infinity
	I1009 23:20:05.531632  102501 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1009 23:20:05.531644  102501 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1009 23:20:05.531651  102501 command_runner.go:130] > TasksMax=infinity
	I1009 23:20:05.531658  102501 command_runner.go:130] > TimeoutStartSec=0
	I1009 23:20:05.531670  102501 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1009 23:20:05.531680  102501 command_runner.go:130] > Delegate=yes
	I1009 23:20:05.531690  102501 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1009 23:20:05.531699  102501 command_runner.go:130] > KillMode=process
	I1009 23:20:05.531704  102501 command_runner.go:130] > [Install]
	I1009 23:20:05.531716  102501 command_runner.go:130] > WantedBy=multi-user.target
	I1009 23:20:05.531793  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:20:05.552369  102501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 23:20:05.569205  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:20:05.580862  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 23:20:05.592206  102501 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 23:20:05.622389  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 23:20:05.634651  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:20:05.651364  102501 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
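crictl resolves its runtime from /etc/crictl.yaml. Note the file is written twice in this run: first pointed at containerd while that runtime is probed, then rewritten to cri-dockerd's socket once Docker is selected, so every crictl call below reaches the right daemon. A sketch of the write, mirroring the mkdir/printf/tee chain:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		const conf = "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n"
		// Mirrors `sudo mkdir -p /etc && printf ... | sudo tee /etc/crictl.yaml`.
		if err := os.MkdirAll("/etc", 0755); err != nil {
			fmt.Println(err)
			return
		}
		if err := os.WriteFile("/etc/crictl.yaml", []byte(conf), 0644); err != nil {
			fmt.Println(err)
		}
	}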
	I1009 23:20:05.651450  102501 ssh_runner.go:195] Run: which cri-dockerd
	I1009 23:20:05.654957  102501 command_runner.go:130] > /usr/bin/cri-dockerd
	I1009 23:20:05.655078  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1009 23:20:05.663913  102501 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1009 23:20:05.679609  102501 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1009 23:20:05.782471  102501 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1009 23:20:05.890512  102501 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1009 23:20:05.890657  102501 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1009 23:20:05.907250  102501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:20:06.008433  102501 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 23:20:07.512425  102501 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.503952293s)
	I1009 23:20:07.512500  102501 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1009 23:20:07.624629  102501 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1009 23:20:07.733434  102501 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1009 23:20:07.845901  102501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:20:07.958297  102501 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1009 23:20:07.974336  102501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:20:08.079485  102501 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1009 23:20:08.157244  102501 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1009 23:20:08.157325  102501 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1009 23:20:08.163200  102501 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1009 23:20:08.163230  102501 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 23:20:08.163241  102501 command_runner.go:130] > Device: 16h/22d	Inode: 894         Links: 1
	I1009 23:20:08.163250  102501 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1009 23:20:08.163257  102501 command_runner.go:130] > Access: 2023-10-09 23:20:08.043785686 +0000
	I1009 23:20:08.163261  102501 command_runner.go:130] > Modify: 2023-10-09 23:20:08.043785686 +0000
	I1009 23:20:08.163267  102501 command_runner.go:130] > Change: 2023-10-09 23:20:08.045785686 +0000
	I1009 23:20:08.163270  102501 command_runner.go:130] >  Birth: -
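The "Will wait 60s for socket path" step is a stat-based readiness gate: cri-docker was just restarted, so the code polls until the unix socket reappears before anything dials it. A sketch of such a wait loop (the 200ms poll interval is an assumption, not minikube's value):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		const path = "/var/run/cri-dockerd.sock"
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			// Ready once the path exists and is actually a socket.
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				fmt.Println("socket ready:", path)
				return
			}
			time.Sleep(200 * time.Millisecond)
		}
		fmt.Println("timed out waiting for", path)
	}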
	I1009 23:20:08.163644  102501 start.go:540] Will wait 60s for crictl version
	I1009 23:20:08.163696  102501 ssh_runner.go:195] Run: which crictl
	I1009 23:20:08.168305  102501 command_runner.go:130] > /usr/bin/crictl
	I1009 23:20:08.168476  102501 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 23:20:08.220203  102501 command_runner.go:130] > Version:  0.1.0
	I1009 23:20:08.220225  102501 command_runner.go:130] > RuntimeName:  docker
	I1009 23:20:08.220230  102501 command_runner.go:130] > RuntimeVersion:  24.0.6
	I1009 23:20:08.220235  102501 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 23:20:08.221898  102501 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1009 23:20:08.221968  102501 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 23:20:08.249007  102501 command_runner.go:130] > 24.0.6
	I1009 23:20:08.250211  102501 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 23:20:08.275194  102501 command_runner.go:130] > 24.0.6
	I1009 23:20:08.277983  102501 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1009 23:20:08.278046  102501 main.go:141] libmachine: (multinode-921619) Calling .GetIP
	I1009 23:20:08.280705  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:08.281134  102501 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:19:54 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:20:08.281172  102501 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:20:08.281378  102501 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 23:20:08.285404  102501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
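The /etc/hosts update uses a strip-then-append pattern: grep -v removes any stale line ending in the tab-separated hostname, the current mapping is appended, and the temp file is copied into place, so repeated starts stay idempotent even if the IP changed. A Go sketch of the same edit; ensureHostsEntry is a hypothetical helper:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line for name, then appends the
	// current ip->name mapping, matching the grep -v / echo pipeline above.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}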
	I1009 23:20:08.298585  102501 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1009 23:20:08.298643  102501 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 23:20:08.316856  102501 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.2
	I1009 23:20:08.316880  102501 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.2
	I1009 23:20:08.316889  102501 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.2
	I1009 23:20:08.316898  102501 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.2
	I1009 23:20:08.316906  102501 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1009 23:20:08.316913  102501 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1009 23:20:08.316922  102501 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1009 23:20:08.316933  102501 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1009 23:20:08.316943  102501 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 23:20:08.316950  102501 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1009 23:20:08.317734  102501 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1009 23:20:08.317760  102501 docker.go:619] Images already preloaded, skipping extraction
	I1009 23:20:08.317824  102501 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 23:20:08.337694  102501 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.2
	I1009 23:20:08.337721  102501 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.2
	I1009 23:20:08.337730  102501 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.2
	I1009 23:20:08.337739  102501 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.2
	I1009 23:20:08.337746  102501 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I1009 23:20:08.337754  102501 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I1009 23:20:08.337763  102501 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I1009 23:20:08.337770  102501 command_runner.go:130] > registry.k8s.io/pause:3.9
	I1009 23:20:08.337778  102501 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 23:20:08.337790  102501 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1009 23:20:08.337829  102501 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1009 23:20:08.337851  102501 cache_images.go:84] Images are preloaded, skipping loading
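The docker images listings above feed a simple set check: if every image the preload tarball would supply is already in the daemon, extraction is skipped. A sketch of that membership test; the want list below is a truncated stand-in for the full preload manifest:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		want := []string{ // illustrative subset of the preloaded images above
			"registry.k8s.io/kube-apiserver:v1.28.2",
			"registry.k8s.io/etcd:3.5.9-0",
			"registry.k8s.io/pause:3.9",
		}
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		for _, img := range want {
			if !have[img] {
				fmt.Println("missing, would extract preload:", img)
			}
		}
	}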
	I1009 23:20:08.337910  102501 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1009 23:20:08.363645  102501 command_runner.go:130] > cgroupfs
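Here the daemon itself is asked which cgroup driver it actually runs with, and the answer ("cgroupfs") is what gets baked into the kubelet configuration below (cgroupDriver: cgroupfs), keeping the two sides consistent. A self-contained version of the probe:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same query as in the log: docker info --format {{.CgroupDriver}}
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
	}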
	I1009 23:20:08.364850  102501 cni.go:84] Creating CNI manager for ""
	I1009 23:20:08.364871  102501 cni.go:136] 2 nodes found, recommending kindnet
	I1009 23:20:08.364897  102501 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1009 23:20:08.364933  102501 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.167 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-921619 NodeName:multinode-921619 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 23:20:08.365112  102501 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-921619"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 23:20:08.365201  102501 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-921619 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-921619 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1009 23:20:08.365259  102501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1009 23:20:08.374426  102501 command_runner.go:130] > kubeadm
	I1009 23:20:08.374443  102501 command_runner.go:130] > kubectl
	I1009 23:20:08.374448  102501 command_runner.go:130] > kubelet
	I1009 23:20:08.374662  102501 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 23:20:08.374745  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 23:20:08.382881  102501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1009 23:20:08.398895  102501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 23:20:08.414664  102501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1009 23:20:08.431261  102501 ssh_runner.go:195] Run: grep 192.168.39.167	control-plane.minikube.internal$ /etc/hosts
	I1009 23:20:08.434954  102501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 23:20:08.446965  102501 certs.go:56] Setting up /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619 for IP: 192.168.39.167
	I1009 23:20:08.446999  102501 certs.go:190] acquiring lock for shared ca certs: {Name:mke2558e764208d6103dc9316e1963152570f27b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:20:08.447139  102501 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17375-78415/.minikube/ca.key
	I1009 23:20:08.447183  102501 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17375-78415/.minikube/proxy-client-ca.key
	I1009 23:20:08.447255  102501 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/client.key
	I1009 23:20:08.447302  102501 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/apiserver.key.5fe8596d
	I1009 23:20:08.447343  102501 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/proxy-client.key
	I1009 23:20:08.447354  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 23:20:08.447367  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 23:20:08.447380  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 23:20:08.447392  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 23:20:08.447411  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 23:20:08.447424  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 23:20:08.447435  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 23:20:08.447447  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 23:20:08.447493  102501 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/85601.pem (1338 bytes)
	W1009 23:20:08.447522  102501 certs.go:433] ignoring /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/85601_empty.pem, impossibly tiny 0 bytes
	I1009 23:20:08.447532  102501 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 23:20:08.447557  102501 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem (1082 bytes)
	I1009 23:20:08.447579  102501 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem (1123 bytes)
	I1009 23:20:08.447600  102501 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem (1679 bytes)
	I1009 23:20:08.447640  102501 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem (1708 bytes)
	I1009 23:20:08.447676  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem -> /usr/share/ca-certificates/856012.pem
	I1009 23:20:08.447690  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:20:08.447702  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/85601.pem -> /usr/share/ca-certificates/85601.pem
	I1009 23:20:08.448411  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1009 23:20:08.471339  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 23:20:08.495014  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 23:20:08.518293  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 23:20:08.541374  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 23:20:08.564198  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 23:20:08.587349  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 23:20:08.610562  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 23:20:08.633178  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem --> /usr/share/ca-certificates/856012.pem (1708 bytes)
	I1009 23:20:08.655844  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 23:20:08.678896  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/certs/85601.pem --> /usr/share/ca-certificates/85601.pem (1338 bytes)
	I1009 23:20:08.701523  102501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 23:20:08.717647  102501 ssh_runner.go:195] Run: openssl version
	I1009 23:20:08.722843  102501 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1009 23:20:08.723202  102501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/856012.pem && ln -fs /usr/share/ca-certificates/856012.pem /etc/ssl/certs/856012.pem"
	I1009 23:20:08.732707  102501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/856012.pem
	I1009 23:20:08.737331  102501 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 23:00 /usr/share/ca-certificates/856012.pem
	I1009 23:20:08.737356  102501 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  9 23:00 /usr/share/ca-certificates/856012.pem
	I1009 23:20:08.737395  102501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/856012.pem
	I1009 23:20:08.742994  102501 command_runner.go:130] > 3ec20f2e
	I1009 23:20:08.743060  102501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/856012.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 23:20:08.752377  102501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 23:20:08.761505  102501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:20:08.765802  102501 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:20:08.765993  102501 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  9 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:20:08.766049  102501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:20:08.771239  102501 command_runner.go:130] > b5213941
	I1009 23:20:08.771294  102501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 23:20:08.780391  102501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/85601.pem && ln -fs /usr/share/ca-certificates/85601.pem /etc/ssl/certs/85601.pem"
	I1009 23:20:08.789372  102501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85601.pem
	I1009 23:20:08.793515  102501 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 23:00 /usr/share/ca-certificates/85601.pem
	I1009 23:20:08.793728  102501 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  9 23:00 /usr/share/ca-certificates/85601.pem
	I1009 23:20:08.793767  102501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85601.pem
	I1009 23:20:08.799268  102501 command_runner.go:130] > 51391683
	I1009 23:20:08.799330  102501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/85601.pem /etc/ssl/certs/51391683.0"
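The openssl x509 -hash / ln -fs pairs implement OpenSSL's CApath convention: a trust directory is scanned by subject-hash filenames, so each certificate needs a <hash>.0 symlink (the .0 suffix distinguishes multiple certs sharing a hash) before tools that trust /etc/ssl/certs will find it. A sketch of one such link, shelling out to openssl for the hash; linkBySubjectHash is a hypothetical helper:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash creates the <subject-hash>.0 symlink that OpenSSL's
	// directory lookup expects, equivalent to `openssl x509 -hash` + `ln -fs`.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		os.Remove(link) // -f semantics: replace a stale link from a previous run
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}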
	I1009 23:20:08.808528  102501 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1009 23:20:08.812734  102501 command_runner.go:130] > ca.crt
	I1009 23:20:08.812758  102501 command_runner.go:130] > ca.key
	I1009 23:20:08.812767  102501 command_runner.go:130] > healthcheck-client.crt
	I1009 23:20:08.812774  102501 command_runner.go:130] > healthcheck-client.key
	I1009 23:20:08.812781  102501 command_runner.go:130] > peer.crt
	I1009 23:20:08.812795  102501 command_runner.go:130] > peer.key
	I1009 23:20:08.812805  102501 command_runner.go:130] > server.crt
	I1009 23:20:08.812811  102501 command_runner.go:130] > server.key
	I1009 23:20:08.812865  102501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 23:20:08.818235  102501 command_runner.go:130] > Certificate will not expire
	I1009 23:20:08.818502  102501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 23:20:08.823810  102501 command_runner.go:130] > Certificate will not expire
	I1009 23:20:08.823867  102501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 23:20:08.829311  102501 command_runner.go:130] > Certificate will not expire
	I1009 23:20:08.829363  102501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 23:20:08.834768  102501 command_runner.go:130] > Certificate will not expire
	I1009 23:20:08.834881  102501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 23:20:08.840297  102501 command_runner.go:130] > Certificate will not expire
	I1009 23:20:08.840408  102501 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 23:20:08.845708  102501 command_runner.go:130] > Certificate will not expire
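Each openssl x509 -checkend 86400 call asks whether a certificate will still be valid 24 hours from now; any cert that failed the check would be regenerated before kubeadm is invoked. The same test can be done natively; a sketch against one of the paths checked above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
		if err != nil {
			fmt.Println(err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM data found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println(err)
			return
		}
		// Equivalent of -checkend 86400: flag certs expiring within 24h.
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h, regenerate it")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}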
	I1009 23:20:08.845923  102501 kubeadm.go:404] StartCluster: {Name:multinode-921619 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-921619 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.121 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:20:08.846092  102501 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1009 23:20:08.865084  102501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 23:20:08.874606  102501 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1009 23:20:08.874632  102501 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1009 23:20:08.874640  102501 command_runner.go:130] > /var/lib/minikube/etcd:
	I1009 23:20:08.874646  102501 command_runner.go:130] > member
	I1009 23:20:08.874782  102501 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1009 23:20:08.874797  102501 kubeadm.go:636] restartCluster start
	I1009 23:20:08.874847  102501 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 23:20:08.883134  102501 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:08.883547  102501 kubeconfig.go:135] verify returned: extract IP: "multinode-921619" does not appear in /home/jenkins/minikube-integration/17375-78415/kubeconfig
	I1009 23:20:08.883643  102501 kubeconfig.go:146] "multinode-921619" context is missing from /home/jenkins/minikube-integration/17375-78415/kubeconfig - will repair!
	I1009 23:20:08.883929  102501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-78415/kubeconfig: {Name:mkee061910efe3fb616ee347e2e0b1635aa74f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:20:08.884285  102501 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17375-78415/kubeconfig
	I1009 23:20:08.884476  102501 kapi.go:59] client config for multinode-921619: &rest.Config{Host:"https://192.168.39.167:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/client.crt", KeyFile:"/home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/client.key", CAFile:"/home/jenkins/minikube-integration/17375-78415/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c11c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 23:20:08.885034  102501 cert_rotation.go:137] Starting client certificate rotation controller
	I1009 23:20:08.885230  102501 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 23:20:08.893659  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:08.893722  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:08.904105  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 23:20:08.904125  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:08.904163  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:08.913942  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	[... 18 further "Checking apiserver status" attempts, retried roughly every 500ms from 23:20:09.414 through 23:20:17.914, each failing identically: sudo pgrep -xnf kube-apiserver.*minikube.* exited with status 1, empty stdout/stderr ...]
	I1009 23:20:18.414769  102501 api_server.go:166] Checking apiserver status ...
	I1009 23:20:18.414869  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 23:20:18.426110  102501 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
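
The run of failed attempts above is minikube polling over SSH for the kube-apiserver process until its context deadline expires (the "context deadline exceeded" on the next log line). A minimal Go sketch of that loop, not minikube's actual code: the function name, the runner signature, and the 500ms interval (inferred from the timestamps) are all assumptions.

    package readiness

    import (
        "context"
        "errors"
        "strings"
        "time"
    )

    // waitForAPIServerPID polls pgrep through the supplied command runner until
    // the kube-apiserver process appears or ctx expires. pgrep exits non-zero
    // when nothing matches, which surfaces here as an error meaning "not up yet".
    func waitForAPIServerPID(ctx context.Context, run func(cmd string) (string, error)) (string, error) {
        const probe = "sudo pgrep -xnf kube-apiserver.*minikube.*"
        for {
            if out, err := run(probe); err == nil {
                return strings.TrimSpace(out), nil // the apiserver PID, e.g. "1551" later in this log
            }
            select {
            case <-ctx.Done():
                // Mirrors the "apiserver error: context deadline exceeded" below.
                return "", errors.Join(errors.New("unable to get apiserver pid"), ctx.Err())
            case <-time.After(500 * time.Millisecond):
            }
        }
    }
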
	I1009 23:20:18.894695  102501 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1009 23:20:18.894736  102501 kubeadm.go:1128] stopping kube-system containers ...
	I1009 23:20:18.894817  102501 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1009 23:20:18.920004  102501 command_runner.go:130] > 453f6dce464b
	I1009 23:20:18.920026  102501 command_runner.go:130] > 88d988a42798
	I1009 23:20:18.920033  102501 command_runner.go:130] > af05e798f2ed
	I1009 23:20:18.920040  102501 command_runner.go:130] > 865e9ceee649
	I1009 23:20:18.920051  102501 command_runner.go:130] > fbb07f20fa16
	I1009 23:20:18.920057  102501 command_runner.go:130] > ce86ce17dc12
	I1009 23:20:18.920063  102501 command_runner.go:130] > 96f26fc70c3e
	I1009 23:20:18.920070  102501 command_runner.go:130] > 3f140f1b444f
	I1009 23:20:18.920079  102501 command_runner.go:130] > aa6841202730
	I1009 23:20:18.920085  102501 command_runner.go:130] > 2c47ae8aed1a
	I1009 23:20:18.920090  102501 command_runner.go:130] > cb0e5b797b8d
	I1009 23:20:18.920097  102501 command_runner.go:130] > ac1bbc7d4311
	I1009 23:20:18.920109  102501 command_runner.go:130] > 3e987851ad86
	I1009 23:20:18.920115  102501 command_runner.go:130] > 7ca4344ccad3
	I1009 23:20:18.920123  102501 command_runner.go:130] > 3b09d0826e99
	I1009 23:20:18.920132  102501 command_runner.go:130] > 665cbd4fad77
	I1009 23:20:18.920137  102501 command_runner.go:130] > 6d2453b4ccbd
	I1009 23:20:18.920142  102501 command_runner.go:130] > 225f665e1777
	I1009 23:20:18.920146  102501 command_runner.go:130] > b387ab7d9878
	I1009 23:20:18.920153  102501 command_runner.go:130] > 84496d0bb2a9
	I1009 23:20:18.920157  102501 command_runner.go:130] > 3c097ec42a79
	I1009 23:20:18.920160  102501 command_runner.go:130] > acc138948996
	I1009 23:20:18.920164  102501 command_runner.go:130] > ac407d90f64c
	I1009 23:20:18.920170  102501 command_runner.go:130] > 66ffe93c503b
	I1009 23:20:18.920173  102501 command_runner.go:130] > 28ea40be486c
	I1009 23:20:18.920177  102501 command_runner.go:130] > 4a9e9455ca75
	I1009 23:20:18.920185  102501 command_runner.go:130] > 866b3c026498
	I1009 23:20:18.920192  102501 command_runner.go:130] > 6807030f028b
	I1009 23:20:18.920196  102501 command_runner.go:130] > 1f3e1b00829d
	I1009 23:20:18.920200  102501 command_runner.go:130] > 8f01da7e8d17
	I1009 23:20:18.920203  102501 command_runner.go:130] > 41105b4ddb01
	I1009 23:20:18.920209  102501 command_runner.go:130] > 7ed3b793352f
	I1009 23:20:18.920232  102501 docker.go:464] Stopping containers: [453f6dce464b 88d988a42798 af05e798f2ed 865e9ceee649 fbb07f20fa16 ce86ce17dc12 96f26fc70c3e 3f140f1b444f aa6841202730 2c47ae8aed1a cb0e5b797b8d ac1bbc7d4311 3e987851ad86 7ca4344ccad3 3b09d0826e99 665cbd4fad77 6d2453b4ccbd 225f665e1777 b387ab7d9878 84496d0bb2a9 3c097ec42a79 acc138948996 ac407d90f64c 66ffe93c503b 28ea40be486c 4a9e9455ca75 866b3c026498 6807030f028b 1f3e1b00829d 8f01da7e8d17 41105b4ddb01 7ed3b793352f]
	I1009 23:20:18.920290  102501 ssh_runner.go:195] Run: docker stop 453f6dce464b 88d988a42798 af05e798f2ed 865e9ceee649 fbb07f20fa16 ce86ce17dc12 96f26fc70c3e 3f140f1b444f aa6841202730 2c47ae8aed1a cb0e5b797b8d ac1bbc7d4311 3e987851ad86 7ca4344ccad3 3b09d0826e99 665cbd4fad77 6d2453b4ccbd 225f665e1777 b387ab7d9878 84496d0bb2a9 3c097ec42a79 acc138948996 ac407d90f64c 66ffe93c503b 28ea40be486c 4a9e9455ca75 866b3c026498 6807030f028b 1f3e1b00829d 8f01da7e8d17 41105b4ddb01 7ed3b793352f
	[... docker stop echoed all 32 container IDs as they stopped, matching the docker ps list above ...]
	I1009 23:20:18.942364  102501 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 23:20:18.957255  102501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 23:20:18.965998  102501 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1009 23:20:18.966020  102501 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1009 23:20:18.966027  102501 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1009 23:20:18.966040  102501 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 23:20:18.966076  102501 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
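
The exit-status-2 `ls` above is how the restart path decides whether stale kubeconfigs need cleanup: if any of the four files is missing, there is nothing stale to reconcile and cleanup is skipped. A sketch of that decision under the same assumption of an SSH command runner; the helper name is hypothetical.

    package kubeconfig

    import (
        "fmt"
        "strings"
    )

    // hasCompleteConfigSet checks all control-plane kubeconfigs with a single
    // ls over SSH, exactly as the log above does. ls exits 2 on a missing path,
    // which arrives here as a non-nil error, so an incomplete set means
    // stale-config cleanup can safely be skipped.
    func hasCompleteConfigSet(run func(cmd string) error) bool {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        if err := run("sudo ls -la " + strings.Join(files, " ")); err != nil {
            fmt.Printf("config check failed, skipping stale config cleanup: %v\n", err)
            return false
        }
        return true
    }
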
	I1009 23:20:18.966121  102501 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 23:20:18.974215  102501 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1009 23:20:18.974250  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 23:20:19.097836  102501 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 23:20:19.097866  102501 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1009 23:20:19.097877  102501 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1009 23:20:19.097887  102501 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 23:20:19.097896  102501 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1009 23:20:19.097907  102501 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1009 23:20:19.097921  102501 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1009 23:20:19.097933  102501 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1009 23:20:19.097951  102501 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1009 23:20:19.097964  102501 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 23:20:19.097981  102501 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 23:20:19.097989  102501 command_runner.go:130] > [certs] Using the existing "sa" key
	I1009 23:20:19.098013  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 23:20:19.149974  102501 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 23:20:19.370640  102501 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 23:20:19.439952  102501 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 23:20:19.587309  102501 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 23:20:19.936787  102501 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 23:20:19.939697  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 23:20:20.131820  102501 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 23:20:20.131849  102501 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 23:20:20.131856  102501 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1009 23:20:20.131884  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 23:20:20.227804  102501 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 23:20:20.228566  102501 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 23:20:20.242318  102501 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 23:20:20.245953  102501 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 23:20:20.251081  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 23:20:20.324130  102501 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
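
Reconfiguring after a restart is not a full `kubeadm init`; the log above shows a fixed sequence of idempotent phases (certs, kubeconfig, kubelet-start, control-plane, etcd), each reusing existing material where it can (note the "Using existing ... certificate" lines). A sketch of driving that sequence; the runner and function name are assumptions, while the command text is taken from the log.

    package kubeadm

    import "fmt"

    // runInitPhases replays the phase sequence from the log above. Each phase
    // is idempotent, so re-running it against a restarted VM reuses existing
    // certificates and manifests rather than regenerating everything.
    func runInitPhases(run func(cmd string) error) error {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
            if err := run(cmd); err != nil {
                return fmt.Errorf("kubeadm init phase %s: %w", p, err)
            }
        }
        return nil
    }
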
	I1009 23:20:20.324282  102501 api_server.go:52] waiting for apiserver process to appear ...
	I1009 23:20:20.324357  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:20:20.341546  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:20:20.855041  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:20:21.354529  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:20:21.855051  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:20:21.919528  102501 command_runner.go:130] > 1551
	I1009 23:20:21.919913  102501 api_server.go:72] duration metric: took 1.595628276s to wait for apiserver process to appear ...
	I1009 23:20:21.919937  102501 api_server.go:88] waiting for apiserver healthz status ...
	I1009 23:20:21.919958  102501 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I1009 23:20:26.240612  102501 api_server.go:279] https://192.168.39.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 23:20:26.240642  102501 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 23:20:26.240656  102501 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I1009 23:20:26.281987  102501 api_server.go:279] https://192.168.39.167:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 23:20:26.282014  102501 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 23:20:26.782530  102501 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I1009 23:20:26.799701  102501 api_server.go:279] https://192.168.39.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1009 23:20:26.799737  102501 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 500:
	[... response body identical to the 32-line healthz report above ...]
	I1009 23:20:27.282386  102501 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I1009 23:20:27.291905  102501 api_server.go:279] https://192.168.39.167:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1009 23:20:27.291952  102501 api_server.go:103] status: https://192.168.39.167:8443/healthz returned error 500:
	[... response body identical to the 32-line healthz report above ...]
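
A 500 from /healthz comes with the verbose per-check report shown above, where [-] lines name the post-start hooks still pending; by this response only rbac/bootstrap-roles remains. A small sketch (hypothetical helper) for extracting the failing check names from such a body:

    package readiness

    import "strings"

    // failedChecks pulls the names of failing checks out of a verbose /healthz
    // body, i.e. lines of the form
    // "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld".
    func failedChecks(body string) []string {
        var failed []string
        for _, line := range strings.Split(body, "\n") {
            line = strings.TrimSpace(line)
            if rest, ok := strings.CutPrefix(line, "[-]"); ok {
                // Keep only the check name, dropping the " failed: ..." suffix.
                name, _, _ := strings.Cut(rest, " ")
                failed = append(failed, name)
            }
        }
        return failed
    }
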
	I1009 23:20:27.782074  102501 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I1009 23:20:27.787200  102501 api_server.go:279] https://192.168.39.167:8443/healthz returned 200:
	ok
	I1009 23:20:27.787284  102501 round_trippers.go:463] GET https://192.168.39.167:8443/version
	I1009 23:20:27.787294  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:27.787303  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:27.787309  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:27.795028  102501 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1009 23:20:27.795050  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:27.795061  102501 round_trippers.go:580]     Content-Length: 263
	I1009 23:20:27.795068  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:27 GMT
	I1009 23:20:27.795081  102501 round_trippers.go:580]     Audit-Id: 8612a343-110c-4656-9675-619c27f9fb3a
	I1009 23:20:27.795092  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:27.795100  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:27.795113  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:27.795121  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:27.795153  102501 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1009 23:20:27.795240  102501 api_server.go:141] control plane version: v1.28.2
	I1009 23:20:27.795257  102501 api_server.go:131] duration metric: took 5.875313407s to wait for apiserver health ...
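
With /healthz green, the control-plane version is confirmed from the /version body shown above. Decoding it needs only a struct covering the fields of interest; the type below is a local sketch rather than apimachinery's version.Info.

    package readiness

    import "encoding/json"

    // versionInfo mirrors the subset of the /version response used here.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    // controlPlaneVersion parses a /version body such as the one above and
    // returns e.g. "v1.28.2".
    func controlPlaneVersion(body []byte) (string, error) {
        var v versionInfo
        if err := json.Unmarshal(body, &v); err != nil {
            return "", err
        }
        return v.GitVersion, nil
    }
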
	I1009 23:20:27.795267  102501 cni.go:84] Creating CNI manager for ""
	I1009 23:20:27.795275  102501 cni.go:136] 2 nodes found, recommending kindnet
	I1009 23:20:27.797117  102501 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1009 23:20:27.798586  102501 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 23:20:27.805179  102501 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1009 23:20:27.805206  102501 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1009 23:20:27.805215  102501 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1009 23:20:27.805222  102501 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 23:20:27.805227  102501 command_runner.go:130] > Access: 2023-10-09 23:19:55.241785686 +0000
	I1009 23:20:27.805232  102501 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I1009 23:20:27.805237  102501 command_runner.go:130] > Change: 2023-10-09 23:19:53.416785686 +0000
	I1009 23:20:27.805240  102501 command_runner.go:130] >  Birth: -
	I1009 23:20:27.805418  102501 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1009 23:20:27.805435  102501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1009 23:20:27.849027  102501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 23:20:29.016702  102501 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1009 23:20:29.016723  102501 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1009 23:20:29.016729  102501 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1009 23:20:29.016734  102501 command_runner.go:130] > daemonset.apps/kindnet configured
	I1009 23:20:29.016759  102501 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.167702436s)
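
Because two nodes were found, kindnet is chosen as the CNI and its manifest is applied with the kubectl binary bundled for the cluster's Kubernetes version; the "unchanged"/"configured" lines confirm the apply was idempotent against objects left from the previous run. A sketch of that step, with the command runner again an assumed helper:

    package cni

    import "fmt"

    // applyCNIManifest applies a previously copied CNI manifest using the
    // kubectl shipped for the cluster's Kubernetes version, matching the
    // command in the log above. kubectl apply is idempotent, so re-running it
    // after a restart reports "unchanged" for objects that already match.
    func applyCNIManifest(run func(cmd string) error, k8sVersion string) error {
        cmd := fmt.Sprintf(
            "sudo /var/lib/minikube/binaries/%s/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml",
            k8sVersion, // e.g. "v1.28.2"
        )
        return run(cmd)
    }
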
	I1009 23:20:29.016782  102501 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 23:20:29.016915  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:29.016927  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.016934  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.016943  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.021909  102501 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 23:20:29.021948  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.021959  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:28 GMT
	I1009 23:20:29.021965  102501 round_trippers.go:580]     Audit-Id: 518acb25-734a-4664-9406-181a1a4fb98e
	I1009 23:20:29.021971  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.021980  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.021988  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.022001  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.023404  102501 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1198"},"items":[{"metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85005 chars]
	I1009 23:20:29.027593  102501 system_pods.go:59] 12 kube-system pods found
	I1009 23:20:29.027628  102501 system_pods.go:61] "coredns-5dd5756b68-m56ds" [2898e186-93b2-49f3-9e87-2f6c4f5619ef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 23:20:29.027636  102501 system_pods.go:61] "etcd-multinode-921619" [5642d3e0-eecc-4fce-a750-9c68f66042e8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 23:20:29.027649  102501 system_pods.go:61] "kindnet-ddwsx" [2475cf58-f505-4b9f-b133-dcd2cdb74489] Running
	I1009 23:20:29.027655  102501 system_pods.go:61] "kindnet-mvhgv" [c66b80a9-b1d2-43b8-b1f2-a9be10b998a6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1009 23:20:29.027659  102501 system_pods.go:61] "kindnet-w7ch7" [21dbde88-f1f9-40d2-9893-8ee4b88088bd] Running
	I1009 23:20:29.027671  102501 system_pods.go:61] "kube-apiserver-multinode-921619" [bb483c09-0ecb-447b-a339-2494340bda70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 23:20:29.027678  102501 system_pods.go:61] "kube-controller-manager-multinode-921619" [e39c9043-b776-4ae0-b79a-528bf4fe7198] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 23:20:29.027686  102501 system_pods.go:61] "kube-proxy-6nfdb" [5cbea5fb-98dd-4276-9b89-588271309935] Running
	I1009 23:20:29.027690  102501 system_pods.go:61] "kube-proxy-qlflz" [18003542-04f4-4330-8054-2e82da1f94f0] Running
	I1009 23:20:29.027695  102501 system_pods.go:61] "kube-proxy-t28g5" [e6e517cb-b1f0-4baa-9bb8-7eb0a8f4c339] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 23:20:29.027709  102501 system_pods.go:61] "kube-scheduler-multinode-921619" [9dc6b59f-e995-4b55-a755-8190f5c2d586] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 23:20:29.027719  102501 system_pods.go:61] "storage-provisioner" [cdc4f60e-144f-44b8-ac4f-741589b7146f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 23:20:29.027726  102501 system_pods.go:74] duration metric: took 10.933921ms to wait for pod list to return data ...
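
The "Running / Ready:ContainersNotReady" annotations above come from each pod's status: a pod can be phase Running while its Ready condition is still False because its containers have not passed readiness yet. A sketch of that classification, using minimal hypothetical types in place of the k8s.io/api ones:

    package pods

    // Minimal stand-ins for the PodList fields used below; real code would use
    // k8s.io/api/core/v1 types instead.
    type pod struct {
        Name       string
        Phase      string // e.g. "Running"
        Conditions []condition
    }

    type condition struct {
        Type   string // e.g. "Ready", "ContainersReady"
        Status string // "True" or "False"
    }

    // unreadyRunning returns the names of pods that are Running but whose Ready
    // condition is not "True", matching the "Running / Ready:ContainersNotReady"
    // lines in the log above.
    func unreadyRunning(items []pod) []string {
        var out []string
        for _, p := range items {
            if p.Phase != "Running" {
                continue
            }
            for _, c := range p.Conditions {
                if c.Type == "Ready" && c.Status != "True" {
                    out = append(out, p.Name)
                    break
                }
            }
        }
        return out
    }
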
	I1009 23:20:29.027735  102501 node_conditions.go:102] verifying NodePressure condition ...
	I1009 23:20:29.027789  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes
	I1009 23:20:29.027796  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.027803  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.027809  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.030194  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:29.030215  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.030233  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.030242  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:28 GMT
	I1009 23:20:29.030250  102501 round_trippers.go:580]     Audit-Id: da209275-c78c-4cc2-9c60-1d8e90dd2d95
	I1009 23:20:29.030258  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.030266  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.030284  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.030549  102501 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1198"},"items":[{"metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 9584 chars]
	I1009 23:20:29.031202  102501 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1009 23:20:29.031226  102501 node_conditions.go:123] node cpu capacity is 2
	I1009 23:20:29.031237  102501 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1009 23:20:29.031241  102501 node_conditions.go:123] node cpu capacity is 2
	I1009 23:20:29.031244  102501 node_conditions.go:105] duration metric: took 3.502641ms to run NodePressure ...
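
The NodePressure check reads each node's capacity from the NodeList; quantities arrive as binary-suffixed strings such as "17784752Ki" (about 17 GiB). A sketch of converting them to bytes; real code would use apimachinery's resource.Quantity instead.

    package nodes

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // parseBinaryQuantity converts the binary-suffixed quantities that appear
    // in node capacity above (e.g. "17784752Ki" = 17784752 * 1024 bytes) into
    // bytes. Only Ki/Mi/Gi are handled here.
    func parseBinaryQuantity(q string) (int64, error) {
        mult := int64(1)
        for suffix, m := range map[string]int64{"Ki": 1 << 10, "Mi": 1 << 20, "Gi": 1 << 30} {
            if strings.HasSuffix(q, suffix) {
                q, mult = strings.TrimSuffix(q, suffix), m
                break
            }
        }
        n, err := strconv.ParseInt(q, 10, 64)
        if err != nil {
            return 0, fmt.Errorf("bad quantity %q: %w", q, err)
        }
        return n * mult, nil
    }
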
	I1009 23:20:29.031264  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 23:20:29.309864  102501 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1009 23:20:29.309885  102501 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1009 23:20:29.310008  102501 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1009 23:20:29.310123  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1009 23:20:29.310134  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.310142  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.310148  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.315687  102501 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 23:20:29.315703  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.315710  102501 round_trippers.go:580]     Audit-Id: f6faf9c8-10a1-4bf8-b7a4-df5bc43a94d6
	I1009 23:20:29.315746  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.315761  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.315769  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.315776  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.315785  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.316152  102501 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1200"},"items":[{"metadata":{"name":"etcd-multinode-921619","namespace":"kube-system","uid":"5642d3e0-eecc-4fce-a750-9c68f66042e8","resourceVersion":"1133","creationTimestamp":"2023-10-09T23:13:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.167:2379","kubernetes.io/config.hash":"51389476e64a88c1fb4ad2d7318e8384","kubernetes.io/config.mirror":"51389476e64a88c1fb4ad2d7318e8384","kubernetes.io/config.seen":"2023-10-09T23:13:10.214448400Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 29793 chars]
	I1009 23:20:29.317202  102501 kubeadm.go:787] kubelet initialised
	I1009 23:20:29.317222  102501 kubeadm.go:788] duration metric: took 7.190657ms waiting for restarted kubelet to initialise ...
	I1009 23:20:29.317232  102501 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 23:20:29.317307  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:29.317322  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.317333  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.317347  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.323980  102501 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1009 23:20:29.323994  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.324001  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.324006  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.324011  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.324016  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.324021  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.324025  102501 round_trippers.go:580]     Audit-Id: 8eebb8f5-311c-4ad2-9ea9-d8d0bd3c654e
	I1009 23:20:29.326041  102501 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1200"},"items":[{"metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85005 chars]
	I1009 23:20:29.328597  102501 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:29.328688  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:29.328701  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.328712  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.328727  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.330843  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:29.330862  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.330871  102501 round_trippers.go:580]     Audit-Id: 3bb7794f-1936-43e6-b1ba-216c53355977
	I1009 23:20:29.330879  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.330887  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.330896  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.330907  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.330912  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.331079  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:29.331467  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:29.331478  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.331485  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.331491  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.338607  102501 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1009 23:20:29.338626  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.338635  102501 round_trippers.go:580]     Audit-Id: a7a6a9af-e924-462c-8ae8-526954ff5f5b
	I1009 23:20:29.338646  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.338654  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.338662  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.338671  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.338680  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.338820  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:29.339109  102501 pod_ready.go:97] node "multinode-921619" hosting pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:29.339132  102501 pod_ready.go:81] duration metric: took 10.516338ms waiting for pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:29.339144  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619" hosting pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
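
Each per-pod wait above first fetches the hosting node; if the node's Ready condition is False, waiting on the pod is pointless, so the wait is cut short with the "skipping!" error seen here. A sketch of that short-circuit, again with minimal hypothetical types in place of the k8s.io/api ones:

    package pods

    import "fmt"

    // Minimal stand-ins for the node fields consulted here.
    type nodeCondition struct {
        Type   string // e.g. "Ready"
        Status string // "True" or "False"
    }

    type hostNode struct {
        Name       string
        Conditions []nodeCondition
    }

    // checkHostReady mirrors the short-circuit above: if the node hosting a
    // pod is not Ready, there is no point waiting for the pod itself.
    func checkHostReady(n hostNode, podName string) error {
        for _, c := range n.Conditions {
            if c.Type == "Ready" && c.Status == "True" {
                return nil
            }
        }
        return fmt.Errorf("node %q hosting pod %q is currently not \"Ready\" (skipping!)", n.Name, podName)
    }
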
	I1009 23:20:29.339163  102501 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:29.339218  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-921619
	I1009 23:20:29.339227  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.339237  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.339247  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.341167  102501 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 23:20:29.341180  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.341186  102501 round_trippers.go:580]     Audit-Id: b707b15b-a4ea-4019-b9f8-499fdd5cfcbd
	I1009 23:20:29.341191  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.341199  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.341204  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.341210  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.341216  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.341596  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-921619","namespace":"kube-system","uid":"5642d3e0-eecc-4fce-a750-9c68f66042e8","resourceVersion":"1133","creationTimestamp":"2023-10-09T23:13:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.167:2379","kubernetes.io/config.hash":"51389476e64a88c1fb4ad2d7318e8384","kubernetes.io/config.mirror":"51389476e64a88c1fb4ad2d7318e8384","kubernetes.io/config.seen":"2023-10-09T23:13:10.214448400Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6306 chars]
	I1009 23:20:29.341943  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:29.341953  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.341960  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.341966  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.344226  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:29.344243  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.344252  102501 round_trippers.go:580]     Audit-Id: 7bfcd7f6-e563-45f4-bfeb-328a88a90153
	I1009 23:20:29.344261  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.344268  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.344276  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.344285  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.344295  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.344570  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:29.344854  102501 pod_ready.go:97] node "multinode-921619" hosting pod "etcd-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:29.344871  102501 pod_ready.go:81] duration metric: took 5.701111ms waiting for pod "etcd-multinode-921619" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:29.344878  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619" hosting pod "etcd-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:29.344890  102501 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:29.344935  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-921619
	I1009 23:20:29.344939  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.344945  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.344954  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.349124  102501 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 23:20:29.349140  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.349148  102501 round_trippers.go:580]     Audit-Id: 914866a0-d61d-46f0-bc72-be03cc62eda6
	I1009 23:20:29.349156  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.349163  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.349169  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.349177  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.349184  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.349759  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-921619","namespace":"kube-system","uid":"bb483c09-0ecb-447b-a339-2494340bda70","resourceVersion":"1135","creationTimestamp":"2023-10-09T23:13:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.167:8443","kubernetes.io/config.hash":"3992fff0ca56642e7b8e9139e8dd6a1b","kubernetes.io/config.mirror":"3992fff0ca56642e7b8e9139e8dd6a1b","kubernetes.io/config.seen":"2023-10-09T23:13:02.202089577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7860 chars]
	I1009 23:20:29.350119  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:29.350136  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.350146  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.350155  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.353685  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:29.353701  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.353709  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.353717  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.353724  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.353735  102501 round_trippers.go:580]     Audit-Id: 92a4993b-7981-4e68-875f-a5a017aa0a98
	I1009 23:20:29.353743  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.353755  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.353923  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:29.354196  102501 pod_ready.go:97] node "multinode-921619" hosting pod "kube-apiserver-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:29.354215  102501 pod_ready.go:81] duration metric: took 9.320646ms waiting for pod "kube-apiserver-multinode-921619" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:29.354226  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619" hosting pod "kube-apiserver-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:29.354237  102501 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:29.354275  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-921619
	I1009 23:20:29.354282  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.354288  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.354294  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.357803  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:29.357831  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.357840  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.357849  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.357855  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.357863  102501 round_trippers.go:580]     Audit-Id: 250daee4-5637-49ae-bde7-fa6792df4c3e
	I1009 23:20:29.357871  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.357880  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.358306  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-921619","namespace":"kube-system","uid":"e39c9043-b776-4ae0-b79a-528bf4fe7198","resourceVersion":"1137","creationTimestamp":"2023-10-09T23:13:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5029a9f6494c3e91f8e10e5de930fb7a","kubernetes.io/config.mirror":"5029a9f6494c3e91f8e10e5de930fb7a","kubernetes.io/config.seen":"2023-10-09T23:13:10.214452022Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7441 chars]
	I1009 23:20:29.417920  102501 request.go:629] Waited for 59.245609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619
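As the message itself notes, these "Waited ... due to client-side throttling" delays come from client-go's own token-bucket rate limiter (driven by rest.Config's QPS and Burst, which default low, roughly 5/10 when unset), not from server-side API Priority and Fairness. A sketch of raising those limits, assuming client-go; the values are illustrative, not what minikube uses:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newFastClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	// Client-side limiter: requests beyond Burst are delayed locally,
	// which is exactly the "Waited for ...ms" entries in this log.
	cfg.QPS = 50    // sustained requests per second (illustrative)
	cfg.Burst = 100 // short bursts allowed above QPS (illustrative)
	return kubernetes.NewForConfig(cfg)
}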
	I1009 23:20:29.418011  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:29.418019  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.418035  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.418054  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.420290  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:29.420307  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.420314  102501 round_trippers.go:580]     Audit-Id: d8fa4025-6f68-4310-aa2e-b37f8f4a4a3a
	I1009 23:20:29.420320  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.420325  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.420330  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.420336  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.420341  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.420585  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:29.420920  102501 pod_ready.go:97] node "multinode-921619" hosting pod "kube-controller-manager-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:29.420941  102501 pod_ready.go:81] duration metric: took 66.69663ms waiting for pod "kube-controller-manager-multinode-921619" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:29.420954  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619" hosting pod "kube-controller-manager-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:29.420967  102501 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6nfdb" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:29.617395  102501 request.go:629] Waited for 196.359758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nfdb
	I1009 23:20:29.617477  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nfdb
	I1009 23:20:29.617482  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.617519  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.617532  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.620332  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:29.620354  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.620363  102501 round_trippers.go:580]     Audit-Id: d1d36c12-4d23-42bf-bf28-71af7c14b1c7
	I1009 23:20:29.620371  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.620378  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.620386  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.620392  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.620399  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.620669  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6nfdb","generateName":"kube-proxy-","namespace":"kube-system","uid":"5cbea5fb-98dd-4276-9b89-588271309935","resourceVersion":"1087","creationTimestamp":"2023-10-09T23:15:07Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"397c0b68-e3eb-4745-879b-9ebb950e99c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"397c0b68-e3eb-4745-879b-9ebb950e99c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I1009 23:20:29.817514  102501 request.go:629] Waited for 196.366236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619-m03
	I1009 23:20:29.817575  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619-m03
	I1009 23:20:29.817580  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:29.817590  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:29.817597  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:29.820515  102501 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1009 23:20:29.820534  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:29.820541  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:29.820546  102501 round_trippers.go:580]     Content-Length: 210
	I1009 23:20:29.820551  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:29.820556  102501 round_trippers.go:580]     Audit-Id: 8227eb14-3466-4250-98ba-021ab11627ce
	I1009 23:20:29.820562  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:29.820569  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:29.820574  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:29.820674  102501 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-921619-m03\" not found","reason":"NotFound","details":{"name":"multinode-921619-m03","kind":"nodes"},"code":404}
	I1009 23:20:29.820884  102501 pod_ready.go:97] node "multinode-921619-m03" hosting pod "kube-proxy-6nfdb" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-921619-m03": nodes "multinode-921619-m03" not found
	I1009 23:20:29.820904  102501 pod_ready.go:81] duration metric: took 399.925167ms waiting for pod "kube-proxy-6nfdb" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:29.820913  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619-m03" hosting pod "kube-proxy-6nfdb" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-921619-m03": nodes "multinode-921619-m03" not found
	I1009 23:20:29.820920  102501 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qlflz" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:30.017460  102501 request.go:629] Waited for 196.386929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlflz
	I1009 23:20:30.017536  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlflz
	I1009 23:20:30.017544  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:30.017553  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:30.017581  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:30.020180  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:30.020204  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:30.020213  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:30.020222  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:30.020229  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:30.020237  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:29 GMT
	I1009 23:20:30.020244  102501 round_trippers.go:580]     Audit-Id: 98a4d584-c326-4dc7-9193-e761ac4fd0e3
	I1009 23:20:30.020253  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:30.020442  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qlflz","generateName":"kube-proxy-","namespace":"kube-system","uid":"18003542-04f4-4330-8054-2e82da1f94f0","resourceVersion":"973","creationTimestamp":"2023-10-09T23:14:14Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"397c0b68-e3eb-4745-879b-9ebb950e99c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:14:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"397c0b68-e3eb-4745-879b-9ebb950e99c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
	I1009 23:20:30.217421  102501 request.go:629] Waited for 196.380894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619-m02
	I1009 23:20:30.217544  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619-m02
	I1009 23:20:30.217553  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:30.217562  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:30.217568  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:30.220192  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:30.220217  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:30.220232  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:30.220240  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:30.220248  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:30 GMT
	I1009 23:20:30.220256  102501 round_trippers.go:580]     Audit-Id: a89f2229-dc84-4627-b943-7332ce83a64c
	I1009 23:20:30.220263  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:30.220271  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:30.220430  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619-m02","uid":"fccae5d8-c831-4dfb-91f9-523a6eb81706","resourceVersion":"992","creationTimestamp":"2023-10-09T23:18:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:18:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3253 chars]
	I1009 23:20:30.220770  102501 pod_ready.go:92] pod "kube-proxy-qlflz" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:30.220789  102501 pod_ready.go:81] duration metric: took 399.862512ms waiting for pod "kube-proxy-qlflz" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:30.220799  102501 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t28g5" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:30.417137  102501 request.go:629] Waited for 196.269389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t28g5
	I1009 23:20:30.417206  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t28g5
	I1009 23:20:30.417211  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:30.417227  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:30.417233  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:30.419855  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:30.419881  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:30.419892  102501 round_trippers.go:580]     Audit-Id: e1af97bb-0e90-44f6-8d14-ee4fff9bd10f
	I1009 23:20:30.419904  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:30.419912  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:30.419920  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:30.419928  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:30.419937  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:30 GMT
	I1009 23:20:30.420094  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t28g5","generateName":"kube-proxy-","namespace":"kube-system","uid":"e6e517cb-b1f0-4baa-9bb8-7eb0a8f4c339","resourceVersion":"1150","creationTimestamp":"2023-10-09T23:13:22Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"397c0b68-e3eb-4745-879b-9ebb950e99c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"397c0b68-e3eb-4745-879b-9ebb950e99c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5933 chars]
	I1009 23:20:30.616959  102501 request.go:629] Waited for 196.346937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:30.617038  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:30.617046  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:30.617057  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:30.617066  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:30.619760  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:30.619783  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:30.619790  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:30 GMT
	I1009 23:20:30.619795  102501 round_trippers.go:580]     Audit-Id: cb4c9dca-c672-4ccc-a686-5159e6fd16e9
	I1009 23:20:30.619802  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:30.619810  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:30.619819  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:30.619827  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:30.619954  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:30.620396  102501 pod_ready.go:97] node "multinode-921619" hosting pod "kube-proxy-t28g5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:30.620418  102501 pod_ready.go:81] duration metric: took 399.611293ms waiting for pod "kube-proxy-t28g5" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:30.620432  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619" hosting pod "kube-proxy-t28g5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:30.620448  102501 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:30.817964  102501 request.go:629] Waited for 197.418808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-921619
	I1009 23:20:30.818066  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-921619
	I1009 23:20:30.818078  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:30.818090  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:30.818102  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:30.820622  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:30.820646  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:30.820653  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:30.820659  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:30 GMT
	I1009 23:20:30.820664  102501 round_trippers.go:580]     Audit-Id: fa0f7a63-74e5-4e41-8c2a-65f36ffa341f
	I1009 23:20:30.820670  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:30.820679  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:30.820693  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:30.820982  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-921619","namespace":"kube-system","uid":"9dc6b59f-e995-4b55-a755-8190f5c2d586","resourceVersion":"1140","creationTimestamp":"2023-10-09T23:13:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"791efd99637773aca959cb55de9c4adc","kubernetes.io/config.mirror":"791efd99637773aca959cb55de9c4adc","kubernetes.io/config.seen":"2023-10-09T23:13:10.214452753Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5153 chars]
	I1009 23:20:31.017899  102501 request.go:629] Waited for 196.384549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:31.017963  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:31.017973  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:31.017988  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:31.018026  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:31.020942  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:31.020966  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:31.020976  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:31.020988  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:31.020997  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:30 GMT
	I1009 23:20:31.021005  102501 round_trippers.go:580]     Audit-Id: 97e01f4c-0d60-4628-8a3e-51d05eaa36c4
	I1009 23:20:31.021013  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:31.021025  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:31.021146  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:31.021600  102501 pod_ready.go:97] node "multinode-921619" hosting pod "kube-scheduler-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:31.021649  102501 pod_ready.go:81] duration metric: took 401.189696ms waiting for pod "kube-scheduler-multinode-921619" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:31.021666  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619" hosting pod "kube-scheduler-multinode-921619" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-921619" has status "Ready":"False"
	I1009 23:20:31.021677  102501 pod_ready.go:38] duration metric: took 1.704428487s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 23:20:31.021702  102501 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 23:20:31.033315  102501 command_runner.go:130] > -16
	I1009 23:20:31.033350  102501 ops.go:34] apiserver oom_adj: -16
	I1009 23:20:31.033359  102501 kubeadm.go:640] restartCluster took 22.158555077s
	I1009 23:20:31.033368  102501 kubeadm.go:406] StartCluster complete in 22.18745007s
	I1009 23:20:31.033390  102501 settings.go:142] acquiring lock: {Name:mkfad4f7073b09104d7b3dee9986ba7dad256c4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:20:31.033474  102501 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17375-78415/kubeconfig
	I1009 23:20:31.034150  102501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-78415/kubeconfig: {Name:mkee061910efe3fb616ee347e2e0b1635aa74f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:20:31.034392  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 23:20:31.034426  102501 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1009 23:20:31.037258  102501 out.go:177] * Enabled addons: 
	I1009 23:20:31.034672  102501 config.go:182] Loaded profile config "multinode-921619": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:20:31.034742  102501 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17375-78415/kubeconfig
	I1009 23:20:31.038654  102501 addons.go:502] enable addons completed in 4.249113ms: enabled=[]
	I1009 23:20:31.039000  102501 kapi.go:59] client config for multinode-921619: &rest.Config{Host:"https://192.168.39.167:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/client.crt", KeyFile:"/home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/client.key", CAFile:"/home/jenkins/minikube-integration/17375-78415/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c11c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
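The kapi.go client config dumped above is a plain rest.Config authenticated with the profile's client certificate against the cluster CA. Reconstructed as a sketch, assuming client-go, with the long home-directory paths abbreviated:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func minikubeClient() (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: "https://192.168.39.167:8443",
		TLSClientConfig: rest.TLSClientConfig{
			// Paths abbreviated from the log above.
			CertFile: ".minikube/profiles/multinode-921619/client.crt",
			KeyFile:  ".minikube/profiles/multinode-921619/client.key",
			CAFile:   ".minikube/ca.crt",
		},
	}
	return kubernetes.NewForConfig(cfg)
}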
	I1009 23:20:31.039473  102501 round_trippers.go:463] GET https://192.168.39.167:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1009 23:20:31.039492  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:31.039504  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:31.039518  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:31.042220  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:31.042241  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:31.042251  102501 round_trippers.go:580]     Content-Length: 292
	I1009 23:20:31.042259  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:30 GMT
	I1009 23:20:31.042267  102501 round_trippers.go:580]     Audit-Id: 95a565c6-0506-4cce-a2bb-068426327003
	I1009 23:20:31.042277  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:31.042289  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:31.042299  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:31.042311  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:31.042344  102501 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"61b33a8d-11f2-4ba8-a069-c1ca4e52a49d","resourceVersion":"1199","creationTimestamp":"2023-10-09T23:13:10Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1009 23:20:31.042517  102501 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-921619" context rescaled to 1 replicas
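The rescale above goes through the deployment's scale subresource (hence the autoscaling/v1 Scale object in the response body) rather than patching the deployment spec directly. The equivalent client-go calls, as a sketch:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func rescaleCoreDNS(ctx context.Context, c kubernetes.Interface, replicas int32) error {
	scale, err := c.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas // e.g. 1, as in the log above
	_, err = c.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}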
	I1009 23:20:31.042557  102501 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 23:20:31.045125  102501 out.go:177] * Verifying Kubernetes components...
	I1009 23:20:31.046508  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:20:31.180668  102501 command_runner.go:130] > apiVersion: v1
	I1009 23:20:31.180694  102501 command_runner.go:130] > data:
	I1009 23:20:31.180703  102501 command_runner.go:130] >   Corefile: |
	I1009 23:20:31.180709  102501 command_runner.go:130] >     .:53 {
	I1009 23:20:31.180715  102501 command_runner.go:130] >         log
	I1009 23:20:31.180721  102501 command_runner.go:130] >         errors
	I1009 23:20:31.180726  102501 command_runner.go:130] >         health {
	I1009 23:20:31.180736  102501 command_runner.go:130] >            lameduck 5s
	I1009 23:20:31.180741  102501 command_runner.go:130] >         }
	I1009 23:20:31.180752  102501 command_runner.go:130] >         ready
	I1009 23:20:31.180761  102501 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1009 23:20:31.180768  102501 command_runner.go:130] >            pods insecure
	I1009 23:20:31.180791  102501 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1009 23:20:31.180802  102501 command_runner.go:130] >            ttl 30
	I1009 23:20:31.180808  102501 command_runner.go:130] >         }
	I1009 23:20:31.180816  102501 command_runner.go:130] >         prometheus :9153
	I1009 23:20:31.180823  102501 command_runner.go:130] >         hosts {
	I1009 23:20:31.180832  102501 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1009 23:20:31.180847  102501 command_runner.go:130] >            fallthrough
	I1009 23:20:31.180854  102501 command_runner.go:130] >         }
	I1009 23:20:31.180863  102501 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1009 23:20:31.180876  102501 command_runner.go:130] >            max_concurrent 1000
	I1009 23:20:31.180886  102501 command_runner.go:130] >         }
	I1009 23:20:31.180894  102501 command_runner.go:130] >         cache 30
	I1009 23:20:31.180907  102501 command_runner.go:130] >         loop
	I1009 23:20:31.180918  102501 command_runner.go:130] >         reload
	I1009 23:20:31.180925  102501 command_runner.go:130] >         loadbalance
	I1009 23:20:31.180932  102501 command_runner.go:130] >     }
	I1009 23:20:31.180940  102501 command_runner.go:130] > kind: ConfigMap
	I1009 23:20:31.180947  102501 command_runner.go:130] > metadata:
	I1009 23:20:31.180955  102501 command_runner.go:130] >   creationTimestamp: "2023-10-09T23:13:10Z"
	I1009 23:20:31.180964  102501 command_runner.go:130] >   name: coredns
	I1009 23:20:31.180972  102501 command_runner.go:130] >   namespace: kube-system
	I1009 23:20:31.180980  102501 command_runner.go:130] >   resourceVersion: "392"
	I1009 23:20:31.180989  102501 command_runner.go:130] >   uid: 3631ac3c-f1e2-4b20-ba21-bc50514ba3c3
	I1009 23:20:31.181101  102501 node_ready.go:35] waiting up to 6m0s for node "multinode-921619" to be "Ready" ...
	I1009 23:20:31.181135  102501 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
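start.go:899 skips rewriting the Corefile because the hosts block dumped above already maps 192.168.39.1 to host.minikube.internal. A sketch of that presence check against the coredns ConfigMap, assuming client-go (not minikube's exact code):

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func hasMinikubeHostRecord(ctx context.Context, c kubernetes.Interface) (bool, error) {
	cm, err := c.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	// The Corefile is stored as plain text under the "Corefile" key.
	return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
}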
	I1009 23:20:31.217500  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:31.217524  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:31.217537  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:31.217543  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:31.220138  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:31.220156  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:31.220163  102501 round_trippers.go:580]     Audit-Id: f27f37e9-7444-4b59-9f23-f3d455a0ea11
	I1009 23:20:31.220168  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:31.220173  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:31.220178  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:31.220186  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:31.220194  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:31 GMT
	I1009 23:20:31.220412  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:31.417158  102501 request.go:629] Waited for 196.304477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:31.417235  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:31.417246  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:31.417261  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:31.417270  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:31.420791  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:31.420812  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:31.420824  102501 round_trippers.go:580]     Audit-Id: 15ab2a29-df6e-41c6-b262-45effc55088f
	I1009 23:20:31.420830  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:31.420836  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:31.420842  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:31.420851  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:31.420859  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:31 GMT
	I1009 23:20:31.421729  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:31.922831  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:31.922850  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:31.922862  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:31.922884  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:31.928746  102501 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 23:20:31.928769  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:31.928776  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:31.928782  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:31.928787  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:31.928792  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:31.928799  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:31 GMT
	I1009 23:20:31.928811  102501 round_trippers.go:580]     Audit-Id: 2f128c8b-da62-4cc1-88d7-2e80bc044c62
	I1009 23:20:31.929877  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:32.422580  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:32.422603  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:32.422612  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:32.422618  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:32.425437  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:32.425460  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:32.425467  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:32.425472  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:32 GMT
	I1009 23:20:32.425482  102501 round_trippers.go:580]     Audit-Id: f51c1120-577c-42ba-8224-490b9dfbb5e6
	I1009 23:20:32.425488  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:32.425493  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:32.425498  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:32.425658  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:32.923122  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:32.923145  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:32.923154  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:32.923160  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:32.926112  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:32.926132  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:32.926143  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:32 GMT
	I1009 23:20:32.926151  102501 round_trippers.go:580]     Audit-Id: f63787e8-5fe1-4121-9889-46b7b827e392
	I1009 23:20:32.926158  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:32.926166  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:32.926172  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:32.926180  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:32.926410  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1127","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I1009 23:20:33.423129  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:33.423153  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.423161  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.423167  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.425944  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:33.425966  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.425975  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.425982  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.425990  102501 round_trippers.go:580]     Audit-Id: 58b0dce3-207f-4190-b22d-47e7b23c5a53
	I1009 23:20:33.425998  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.426007  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.426014  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.426212  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:33.426545  102501 node_ready.go:49] node "multinode-921619" has status "Ready":"True"
	I1009 23:20:33.426561  102501 node_ready.go:38] duration metric: took 2.245426892s waiting for node "multinode-921619" to be "Ready" ...
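The roughly half-second cadence of the GETs above (23:20:31.4, :31.9, :32.4, :32.9, :33.4) is a poll loop on the node's Ready condition that exits as soon as resourceVersion 1213 reports "Ready":"True". Under client-go's wait helpers it could look like this (a sketch; the interval mirrors the log, the condition function is hypothetical):

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(c kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := c.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient error: keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}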
	I1009 23:20:33.426570  102501 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 23:20:33.426619  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:33.426626  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.426640  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.426646  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.432232  102501 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 23:20:33.432248  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.432257  102501 round_trippers.go:580]     Audit-Id: 2cd4d682-2a6d-4297-8c1e-80c6bd1a3ae3
	I1009 23:20:33.432266  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.432274  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.432282  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.432290  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.432301  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.433210  102501 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1213"},"items":[{"metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84415 chars]
	I1009 23:20:33.435800  102501 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:33.435880  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:33.435889  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.435897  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.435902  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.441427  102501 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 23:20:33.441452  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.441461  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.441467  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.441472  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.441477  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.441486  102501 round_trippers.go:580]     Audit-Id: 0d66d537-177f-4e10-837e-2948c506db3d
	I1009 23:20:33.441491  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.441616  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:33.442045  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:33.442058  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.442065  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.442070  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.444241  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:33.444257  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.444266  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.444275  102501 round_trippers.go:580]     Audit-Id: b0ec77d6-df06-4a82-a55c-7d7e907f46c5
	I1009 23:20:33.444283  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.444292  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.444301  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.444311  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.444460  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:33.444775  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:33.444788  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.444798  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.444806  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.446862  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:33.446882  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.446892  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.446900  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.446910  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.446919  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.446924  102501 round_trippers.go:580]     Audit-Id: cbfc2cb5-cecf-432e-8a81-e0b3843571e6
	I1009 23:20:33.446930  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.447085  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:33.447492  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:33.447507  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.447517  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.447531  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.449190  102501 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 23:20:33.449203  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.449209  102501 round_trippers.go:580]     Audit-Id: 7f85b8af-ccac-4847-a48e-1983ca2a27a9
	I1009 23:20:33.449214  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.449219  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.449224  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.449245  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.449251  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.449374  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:33.950490  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:33.950512  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.950521  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.950527  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.953588  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:33.953609  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.953621  102501 round_trippers.go:580]     Audit-Id: d3668535-5fdc-4c30-9f13-866cd609737d
	I1009 23:20:33.953629  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.953641  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.953650  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.953661  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.953671  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.953951  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:33.954405  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:33.954416  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:33.954424  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:33.954429  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:33.956577  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:33.956592  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:33.956601  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:33.956608  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:33.956617  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:33.956627  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:33.956637  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:33 GMT
	I1009 23:20:33.956653  102501 round_trippers.go:580]     Audit-Id: d894d933-6de0-40c4-8e47-8860e6558204
	I1009 23:20:33.956879  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:34.450606  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:34.450634  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:34.450648  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:34.450656  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:34.453643  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:34.453661  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:34.453668  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:34.453674  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:34 GMT
	I1009 23:20:34.453679  102501 round_trippers.go:580]     Audit-Id: c2486ecd-2503-4c31-818a-826c3eca4681
	I1009 23:20:34.453684  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:34.453689  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:34.453694  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:34.453878  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:34.454428  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:34.454447  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:34.454471  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:34.454481  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:34.456524  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:34.456538  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:34.456544  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:34.456550  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:34.456556  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:34.456562  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:34 GMT
	I1009 23:20:34.456570  102501 round_trippers.go:580]     Audit-Id: f01a4fed-03f2-482c-a932-c76f5f3a978e
	I1009 23:20:34.456575  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:34.456840  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:34.950585  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:34.950609  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:34.950617  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:34.950623  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:34.954173  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:34.954193  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:34.954216  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:34.954222  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:34.954229  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:34 GMT
	I1009 23:20:34.954238  102501 round_trippers.go:580]     Audit-Id: 07c18ec1-717a-4b19-9d72-21b74a6b64ad
	I1009 23:20:34.954246  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:34.954255  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:34.954449  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:34.954915  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:34.954928  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:34.954935  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:34.954941  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:34.957289  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:34.957308  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:34.957316  102501 round_trippers.go:580]     Audit-Id: 99ce2485-a0e0-43ce-8c06-54f21abe6301
	I1009 23:20:34.957325  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:34.957330  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:34.957338  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:34.957344  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:34.957349  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:34 GMT
	I1009 23:20:34.957886  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:35.450656  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:35.450687  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:35.450700  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:35.450709  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:35.453447  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:35.453465  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:35.453472  102501 round_trippers.go:580]     Audit-Id: 1b799ce7-2c5d-424f-8268-673ef85820b6
	I1009 23:20:35.453478  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:35.453483  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:35.453488  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:35.453493  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:35.453498  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:35 GMT
	I1009 23:20:35.453691  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:35.454289  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:35.454309  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:35.454320  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:35.454329  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:35.456449  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:35.456468  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:35.456477  102501 round_trippers.go:580]     Audit-Id: 41356d37-7f1a-42e5-9190-2894a1af2276
	I1009 23:20:35.456487  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:35.456494  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:35.456502  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:35.456511  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:35.456518  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:35 GMT
	I1009 23:20:35.456924  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:35.457211  102501 pod_ready.go:102] pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace has status "Ready":"False"
	I1009 23:20:35.950676  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:35.950715  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:35.950725  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:35.950733  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:35.953503  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:35.953521  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:35.953531  102501 round_trippers.go:580]     Audit-Id: 8b1fbc96-38f3-4d0e-90a9-7631660dab75
	I1009 23:20:35.953536  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:35.953541  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:35.953546  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:35.953551  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:35.953555  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:35 GMT
	I1009 23:20:35.953765  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:35.954413  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:35.954429  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:35.954439  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:35.954448  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:35.956765  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:35.956788  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:35.956796  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:35 GMT
	I1009 23:20:35.956802  102501 round_trippers.go:580]     Audit-Id: c8c65a02-3ace-4e9d-bdb5-554a2b21a08d
	I1009 23:20:35.956807  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:35.956821  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:35.956828  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:35.956837  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:35.956942  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:36.450595  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:36.450617  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:36.450625  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:36.450631  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:36.453589  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:36.453613  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:36.453625  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:36 GMT
	I1009 23:20:36.453633  102501 round_trippers.go:580]     Audit-Id: 4c2e7bef-4838-4567-9c22-574f60a5cbfc
	I1009 23:20:36.453640  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:36.453648  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:36.453656  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:36.453666  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:36.453847  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:36.454314  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:36.454325  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:36.454339  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:36.454348  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:36.456697  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:36.456713  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:36.456720  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:36.456725  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:36.456730  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:36.456737  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:36.456742  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:36 GMT
	I1009 23:20:36.456747  102501 round_trippers.go:580]     Audit-Id: a28d550e-72fc-411a-b2c0-b82691b3d1a3
	I1009 23:20:36.456933  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:36.950614  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:36.950637  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:36.950646  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:36.950652  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:36.953322  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:36.953348  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:36.953359  102501 round_trippers.go:580]     Audit-Id: 9f94d7cf-e192-4798-a808-ae495b6a5dc0
	I1009 23:20:36.953377  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:36.953383  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:36.953388  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:36.953393  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:36.953399  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:36 GMT
	I1009 23:20:36.953615  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:36.954051  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:36.954063  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:36.954070  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:36.954075  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:36.956087  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:36.956101  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:36.956107  102501 round_trippers.go:580]     Audit-Id: ae602bc2-ec26-4b82-a7ba-9386dc3ced98
	I1009 23:20:36.956112  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:36.956117  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:36.956125  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:36.956143  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:36.956159  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:36 GMT
	I1009 23:20:36.956559  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:37.450302  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:37.450334  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:37.450347  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:37.450357  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:37.453118  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:37.453136  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:37.453143  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:37.453148  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:37.453153  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:37.453158  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:37.453164  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:37 GMT
	I1009 23:20:37.453168  102501 round_trippers.go:580]     Audit-Id: d8f4b962-c4d2-4ef2-8910-e4ef9cf07e7a
	I1009 23:20:37.453347  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:37.453949  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:37.453963  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:37.453974  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:37.453985  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:37.456136  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:37.456148  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:37.456154  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:37.456162  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:37.456167  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:37.456172  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:37 GMT
	I1009 23:20:37.456177  102501 round_trippers.go:580]     Audit-Id: 8039e311-517e-441c-a4ce-3f7153387b2c
	I1009 23:20:37.456182  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:37.456378  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:37.457296  102501 pod_ready.go:102] pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace has status "Ready":"False"
	I1009 23:20:37.950553  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:37.950581  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:37.950593  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:37.950602  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:37.953661  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:37.953687  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:37.953697  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:37.953705  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:37.953714  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:37.953722  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:37 GMT
	I1009 23:20:37.953730  102501 round_trippers.go:580]     Audit-Id: 8581c215-d165-4428-b59b-cd6196f50f8c
	I1009 23:20:37.953737  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:37.954054  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:37.954524  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:37.954535  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:37.954543  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:37.954548  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:37.957060  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:37.957081  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:37.957090  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:37.957099  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:37.957106  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:37.957115  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:37.957122  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:37 GMT
	I1009 23:20:37.957129  102501 round_trippers.go:580]     Audit-Id: 5487da8d-0781-4273-9481-e9476cb19a26
	I1009 23:20:37.957757  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:38.450541  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:38.450573  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:38.450586  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:38.450632  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:38.453295  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:38.453314  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:38.453323  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:38 GMT
	I1009 23:20:38.453332  102501 round_trippers.go:580]     Audit-Id: e8294c39-f93d-4bd6-90ac-a4a425f619ca
	I1009 23:20:38.453338  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:38.453346  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:38.453353  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:38.453362  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:38.453753  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:38.454201  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:38.454215  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:38.454222  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:38.454228  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:38.456835  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:38.456856  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:38.456865  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:38.456873  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:38 GMT
	I1009 23:20:38.456880  102501 round_trippers.go:580]     Audit-Id: 8661ad97-c4ac-428f-b70b-04c7d1742d82
	I1009 23:20:38.456887  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:38.456894  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:38.456906  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:38.457908  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:38.950784  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:38.950810  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:38.950819  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:38.950825  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:38.955797  102501 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 23:20:38.955817  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:38.955827  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:38.955834  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:38.955842  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:38.955849  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:38.955856  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:38 GMT
	I1009 23:20:38.955870  102501 round_trippers.go:580]     Audit-Id: 0476ef7e-c3c4-408f-9971-2cfff635aa22
	I1009 23:20:38.956478  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:38.956961  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:38.956975  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:38.956985  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:38.956994  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:38.960366  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:38.960388  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:38.960397  102501 round_trippers.go:580]     Audit-Id: 939873c9-4aa3-4709-a0eb-f4ae58af1a39
	I1009 23:20:38.960405  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:38.960414  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:38.960422  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:38.960431  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:38.960439  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:38 GMT
	I1009 23:20:38.960636  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:39.450297  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:39.450336  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:39.450349  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:39.450358  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:39.453393  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:39.453418  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:39.453428  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:39 GMT
	I1009 23:20:39.453436  102501 round_trippers.go:580]     Audit-Id: 02006573-bdfe-485c-96e5-865d0d5dc79a
	I1009 23:20:39.453444  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:39.453452  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:39.453458  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:39.453466  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:39.453688  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:39.454212  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:39.454227  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:39.454235  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:39.454243  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:39.456618  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:39.456639  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:39.456648  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:39 GMT
	I1009 23:20:39.456655  102501 round_trippers.go:580]     Audit-Id: 0813b480-1559-4de5-8d1e-6a66e1806d0a
	I1009 23:20:39.456663  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:39.456670  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:39.456677  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:39.456685  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:39.456888  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:39.950666  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:39.950688  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:39.950697  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:39.950703  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:39.953686  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:39.953711  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:39.953721  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:39.953729  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:39.953736  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:39.953744  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:39.953752  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:39 GMT
	I1009 23:20:39.953760  102501 round_trippers.go:580]     Audit-Id: 2d09ac8b-6324-4c53-8889-4495fb395f12
	I1009 23:20:39.954305  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:39.954825  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:39.954839  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:39.954847  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:39.954852  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:39.957134  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:39.957151  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:39.957161  102501 round_trippers.go:580]     Audit-Id: cdfda355-ccb5-41c2-aa66-e6ac8badbb2a
	I1009 23:20:39.957170  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:39.957182  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:39.957197  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:39.957206  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:39.957219  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:39 GMT
	I1009 23:20:39.957404  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:39.957726  102501 pod_ready.go:102] pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace has status "Ready":"False"
	I1009 23:20:40.450024  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:40.450045  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:40.450054  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:40.450060  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:40.453142  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:40.453168  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:40.453179  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:40.453187  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:40.453195  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:40.453202  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:40 GMT
	I1009 23:20:40.453210  102501 round_trippers.go:580]     Audit-Id: 4baf2498-8a31-4f5e-b0ff-af643558e31a
	I1009 23:20:40.453217  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:40.453430  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:40.453938  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:40.453953  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:40.453960  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:40.453966  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:40.456238  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:40.456252  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:40.456258  102501 round_trippers.go:580]     Audit-Id: 48d647bb-7746-440b-a8c9-bfc473f05f84
	I1009 23:20:40.456264  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:40.456272  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:40.456280  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:40.456293  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:40.456306  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:40 GMT
	I1009 23:20:40.456441  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:40.950136  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:40.950161  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:40.950170  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:40.950176  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:40.952915  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:40.952944  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:40.952954  102501 round_trippers.go:580]     Audit-Id: 5cadf7f4-8cde-450d-ac16-88e7044a8cb7
	I1009 23:20:40.952961  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:40.952969  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:40.952978  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:40.952987  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:40.952996  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:40 GMT
	I1009 23:20:40.953378  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:40.953933  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:40.953950  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:40.953962  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:40.953975  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:40.956174  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:40.956191  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:40.956198  102501 round_trippers.go:580]     Audit-Id: cae46e89-2922-4dcf-b9ab-334199af84a8
	I1009 23:20:40.956204  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:40.956211  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:40.956217  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:40.956224  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:40.956234  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:40 GMT
	I1009 23:20:40.956388  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:41.450010  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:41.450033  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:41.450042  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:41.450048  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:41.453201  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:41.453226  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:41.453237  102501 round_trippers.go:580]     Audit-Id: b156692c-6495-4edf-95bf-0ee131f2d945
	I1009 23:20:41.453249  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:41.453256  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:41.453261  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:41.453268  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:41.453273  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:41 GMT
	I1009 23:20:41.453837  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:41.454410  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:41.454427  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:41.454438  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:41.454448  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:41.456567  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:41.456586  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:41.456595  102501 round_trippers.go:580]     Audit-Id: c9dc1713-32ca-4291-9f10-a4a8099346b2
	I1009 23:20:41.456606  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:41.456614  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:41.456627  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:41.456636  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:41.456646  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:41 GMT
	I1009 23:20:41.456842  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:41.950542  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:41.950566  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:41.950575  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:41.950582  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:41.953575  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:41.953599  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:41.953606  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:41.953612  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:41.953618  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:41 GMT
	I1009 23:20:41.953628  102501 round_trippers.go:580]     Audit-Id: 000ce8d0-2dbb-40a6-b484-ad04e5b43314
	I1009 23:20:41.953635  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:41.953641  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:41.953787  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:41.954263  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:41.954275  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:41.954282  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:41.954288  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:41.956309  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:41.956329  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:41.956338  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:41.956345  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:41 GMT
	I1009 23:20:41.956356  102501 round_trippers.go:580]     Audit-Id: 2a0a22bd-96d8-4498-bc83-d56ca261bc9e
	I1009 23:20:41.956363  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:41.956374  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:41.956385  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:41.956598  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:42.450250  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:42.450273  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:42.450282  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:42.450288  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:42.453440  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:42.453458  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:42.453465  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:42.453471  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:42.453477  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:42 GMT
	I1009 23:20:42.453482  102501 round_trippers.go:580]     Audit-Id: ef242ea2-170d-4dbb-9f84-095d55874b92
	I1009 23:20:42.453490  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:42.453496  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:42.453680  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:42.454149  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:42.454163  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:42.454172  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:42.454178  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:42.458035  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:42.458050  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:42.458059  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:42 GMT
	I1009 23:20:42.458067  102501 round_trippers.go:580]     Audit-Id: 2550c4db-473c-43e5-a656-9ae7b1e3ec7f
	I1009 23:20:42.458075  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:42.458085  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:42.458093  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:42.458102  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:42.458829  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:42.459276  102501 pod_ready.go:102] pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace has status "Ready":"False"
	I1009 23:20:42.950345  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:42.950365  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:42.950376  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:42.950383  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:42.952895  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:42.952914  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:42.952922  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:42.952930  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:42.952939  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:42.952946  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:42.952953  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:42 GMT
	I1009 23:20:42.952962  102501 round_trippers.go:580]     Audit-Id: 365954c9-b6d3-4982-9b8a-94a8c6a38b24
	I1009 23:20:42.953175  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:42.953641  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:42.953653  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:42.953661  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:42.953667  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:42.956880  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:42.956901  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:42.956908  102501 round_trippers.go:580]     Audit-Id: d83801ca-ef60-44e6-ab20-505d75c7f0bc
	I1009 23:20:42.956918  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:42.956926  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:42.956936  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:42.956944  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:42.956956  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:42 GMT
	I1009 23:20:42.957072  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:43.450745  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:43.450768  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:43.450777  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:43.450782  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:43.453815  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:43.453836  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:43.453844  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:43.453851  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:43 GMT
	I1009 23:20:43.453858  102501 round_trippers.go:580]     Audit-Id: 729df228-740a-4ccd-810f-d5815b39d10f
	I1009 23:20:43.453867  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:43.453875  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:43.453882  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:43.454086  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:43.454668  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:43.454681  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:43.454692  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:43.454702  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:43.457415  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:43.457436  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:43.457446  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:43.457455  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:43.457463  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:43.457475  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:43.457490  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:43 GMT
	I1009 23:20:43.457498  102501 round_trippers.go:580]     Audit-Id: 10531947-3aeb-48d5-a013-fee9f94bf55c
	I1009 23:20:43.457995  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:43.950709  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:43.950731  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:43.950744  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:43.950750  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:43.954962  102501 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 23:20:43.954988  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:43.954999  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:43.955008  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:43.955016  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:43.955023  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:43 GMT
	I1009 23:20:43.955034  102501 round_trippers.go:580]     Audit-Id: 3e5595b0-6e28-4125-937a-1bfe46bbd865
	I1009 23:20:43.955042  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:43.955197  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:43.955673  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:43.955685  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:43.955693  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:43.955698  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:43.957906  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:43.957923  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:43.957933  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:43.957942  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:43.957949  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:43 GMT
	I1009 23:20:43.957957  102501 round_trippers.go:580]     Audit-Id: 8e59839b-4b8a-4355-9bd0-f5905da14813
	I1009 23:20:43.957964  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:43.957972  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:43.958216  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:44.450571  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:44.450599  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.450612  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.450621  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.453574  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:44.453590  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.453598  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.453603  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.453608  102501 round_trippers.go:580]     Audit-Id: 701a95e1-644f-47ba-a8fe-039e7e489cf5
	I1009 23:20:44.453613  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.453619  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.453624  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.453852  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1147","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I1009 23:20:44.454473  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:44.454491  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.454501  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.454516  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.456695  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:44.456717  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.456728  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.456736  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.456742  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.456753  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.456761  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.456774  102501 round_trippers.go:580]     Audit-Id: 8259a205-4926-466b-8eb4-dc7f362828e1
	I1009 23:20:44.456993  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:44.950904  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m56ds
	I1009 23:20:44.950938  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.950951  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.950961  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.954163  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:44.954182  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.954191  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.954199  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.954207  102501 round_trippers.go:580]     Audit-Id: f17773d0-5364-4e3c-abfa-567d417ce0e4
	I1009 23:20:44.954214  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.954223  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.954233  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.954673  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1248","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I1009 23:20:44.955122  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:44.955133  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.955140  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.955146  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.957283  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:44.957299  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.957306  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.957315  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.957321  102501 round_trippers.go:580]     Audit-Id: d0478365-3c47-4914-acef-c750200ca712
	I1009 23:20:44.957329  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.957335  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.957343  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.957669  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:44.957968  102501 pod_ready.go:92] pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:44.957986  102501 pod_ready.go:81] duration metric: took 11.522164121s waiting for pod "coredns-5dd5756b68-m56ds" in "kube-system" namespace to be "Ready" ...
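
	The readiness loop above keeps re-fetching each pod until its Ready condition reports "True". A minimal Go sketch of that condition test against the k8s.io/api types; the helper name isPodReady is illustrative, not minikube's actual pod_ready code:

	    package main

	    import (
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	    )

	    // isPodReady reports whether the pod's Ready condition is True, the same
	    // check the waiter applies to coredns above.
	    func isPodReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        pod := &corev1.Pod{Status: corev1.PodStatus{
	            Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
	        }}
	        fmt.Println(isPodReady(pod)) // true
	    }
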
	I1009 23:20:44.957998  102501 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:44.958059  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-921619
	I1009 23:20:44.958069  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.958079  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.958089  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.960240  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:44.960256  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.960262  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.960268  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.960273  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.960278  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.960282  102501 round_trippers.go:580]     Audit-Id: 974f8aa0-1058-41d8-87c4-c2bade8f9075
	I1009 23:20:44.960291  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.960901  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-921619","namespace":"kube-system","uid":"5642d3e0-eecc-4fce-a750-9c68f66042e8","resourceVersion":"1236","creationTimestamp":"2023-10-09T23:13:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.167:2379","kubernetes.io/config.hash":"51389476e64a88c1fb4ad2d7318e8384","kubernetes.io/config.mirror":"51389476e64a88c1fb4ad2d7318e8384","kubernetes.io/config.seen":"2023-10-09T23:13:10.214448400Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6082 chars]
	I1009 23:20:44.961281  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:44.961292  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.961299  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.961305  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.963201  102501 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 23:20:44.963219  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.963228  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.963236  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.963258  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.963270  102501 round_trippers.go:580]     Audit-Id: 7cd46612-3ce4-48dd-999b-0eaa5ffba4c1
	I1009 23:20:44.963283  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.963295  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.963447  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:44.963749  102501 pod_ready.go:92] pod "etcd-multinode-921619" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:44.963763  102501 pod_ready.go:81] duration metric: took 5.759104ms waiting for pod "etcd-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:44.963780  102501 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:44.963828  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-921619
	I1009 23:20:44.963835  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.963842  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.963848  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.966062  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:44.966078  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.966092  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.966099  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.966107  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.966115  102501 round_trippers.go:580]     Audit-Id: ea54d090-70f8-471b-942e-38a9e8424516
	I1009 23:20:44.966127  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.966137  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.966305  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-921619","namespace":"kube-system","uid":"bb483c09-0ecb-447b-a339-2494340bda70","resourceVersion":"1215","creationTimestamp":"2023-10-09T23:13:09Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.167:8443","kubernetes.io/config.hash":"3992fff0ca56642e7b8e9139e8dd6a1b","kubernetes.io/config.mirror":"3992fff0ca56642e7b8e9139e8dd6a1b","kubernetes.io/config.seen":"2023-10-09T23:13:02.202089577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7616 chars]
	I1009 23:20:44.966762  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:44.966778  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.966788  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.966796  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.968643  102501 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 23:20:44.968660  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.968671  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.968678  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.968683  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.968689  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.968697  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.968708  102501 round_trippers.go:580]     Audit-Id: 0882ad92-7872-4bae-a419-4526ad37647b
	I1009 23:20:44.968909  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:44.969230  102501 pod_ready.go:92] pod "kube-apiserver-multinode-921619" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:44.969245  102501 pod_ready.go:81] duration metric: took 5.45575ms waiting for pod "kube-apiserver-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:44.969254  102501 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:44.969305  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-921619
	I1009 23:20:44.969313  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.969319  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.969325  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.971182  102501 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 23:20:44.971201  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.971210  102501 round_trippers.go:580]     Audit-Id: eba575ff-0f79-4d59-aa23-831769e821e0
	I1009 23:20:44.971218  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.971226  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.971233  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.971259  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.971265  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.971549  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-921619","namespace":"kube-system","uid":"e39c9043-b776-4ae0-b79a-528bf4fe7198","resourceVersion":"1221","creationTimestamp":"2023-10-09T23:13:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5029a9f6494c3e91f8e10e5de930fb7a","kubernetes.io/config.mirror":"5029a9f6494c3e91f8e10e5de930fb7a","kubernetes.io/config.seen":"2023-10-09T23:13:10.214452022Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I1009 23:20:44.971939  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:44.971952  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.971959  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.971965  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.973731  102501 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 23:20:44.973748  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.973756  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.973765  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.973774  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.973780  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.973786  102501 round_trippers.go:580]     Audit-Id: 6ecb7555-b0cc-4193-b823-4fdec82d35eb
	I1009 23:20:44.973791  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.973925  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:44.974200  102501 pod_ready.go:92] pod "kube-controller-manager-multinode-921619" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:44.974214  102501 pod_ready.go:81] duration metric: took 4.949426ms waiting for pod "kube-controller-manager-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:44.974223  102501 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6nfdb" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:44.974272  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6nfdb
	I1009 23:20:44.974283  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.974293  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.974306  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.976081  102501 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 23:20:44.976095  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.976102  102501 round_trippers.go:580]     Audit-Id: 0b021f28-7fc8-42d9-9a5b-9bff16c9f8f5
	I1009 23:20:44.976107  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.976112  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.976117  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.976122  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.976127  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.976244  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6nfdb","generateName":"kube-proxy-","namespace":"kube-system","uid":"5cbea5fb-98dd-4276-9b89-588271309935","resourceVersion":"1087","creationTimestamp":"2023-10-09T23:15:07Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"397c0b68-e3eb-4745-879b-9ebb950e99c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"397c0b68-e3eb-4745-879b-9ebb950e99c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I1009 23:20:44.976607  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619-m03
	I1009 23:20:44.976619  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:44.976626  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:44.976632  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:44.978313  102501 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I1009 23:20:44.978322  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:44.978328  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:44.978334  102501 round_trippers.go:580]     Content-Length: 210
	I1009 23:20:44.978342  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:44 GMT
	I1009 23:20:44.978351  102501 round_trippers.go:580]     Audit-Id: 187e7715-12d3-40c8-ba73-48e29062ebe2
	I1009 23:20:44.978363  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:44.978370  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:44.978376  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:44.978452  102501 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-921619-m03\" not found","reason":"NotFound","details":{"name":"multinode-921619-m03","kind":"nodes"},"code":404}
	I1009 23:20:44.978548  102501 pod_ready.go:97] node "multinode-921619-m03" hosting pod "kube-proxy-6nfdb" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-921619-m03": nodes "multinode-921619-m03" not found
	I1009 23:20:44.978561  102501 pod_ready.go:81] duration metric: took 4.332634ms waiting for pod "kube-proxy-6nfdb" in "kube-system" namespace to be "Ready" ...
	E1009 23:20:44.978569  102501 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-921619-m03" hosting pod "kube-proxy-6nfdb" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-921619-m03": nodes "multinode-921619-m03" not found
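
	Note what happens here: the GET for node multinode-921619-m03 returns 404 (the third node was removed before this restart), so the waiter skips kube-proxy-6nfdb instead of failing the whole wait. A sketch of that skip-on-missing-node decision using client-go's error helpers and a fake clientset; the function nodeGone and the overall shape are illustrative, not minikube's WaitExtra implementation:

	    package main

	    import (
	        "context"
	        "fmt"

	        apierrors "k8s.io/apimachinery/pkg/api/errors"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/kubernetes/fake"
	    )

	    // nodeGone reports whether the pod's host node no longer exists, in which
	    // case a readiness waiter should skip the pod rather than fail on it.
	    func nodeGone(client kubernetes.Interface, nodeName string) (bool, error) {
	        _, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	        if apierrors.IsNotFound(err) {
	            return true, nil
	        }
	        return false, err
	    }

	    func main() {
	        client := fake.NewSimpleClientset() // empty fake cluster: no nodes exist
	        gone, err := nodeGone(client, "multinode-921619-m03")
	        fmt.Println(gone, err) // true <nil>
	    }
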
	I1009 23:20:44.978575  102501 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qlflz" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:45.150898  102501 request.go:629] Waited for 172.264891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlflz
	I1009 23:20:45.150973  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlflz
	I1009 23:20:45.150980  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:45.150993  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:45.151008  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:45.153692  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:45.153711  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:45.153718  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:45.153724  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:45.153729  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:45.153734  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:45.153745  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:45 GMT
	I1009 23:20:45.153750  102501 round_trippers.go:580]     Audit-Id: 1c7d393b-15ca-416d-9939-5120ca21de4d
	I1009 23:20:45.153868  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-qlflz","generateName":"kube-proxy-","namespace":"kube-system","uid":"18003542-04f4-4330-8054-2e82da1f94f0","resourceVersion":"973","creationTimestamp":"2023-10-09T23:14:14Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"397c0b68-e3eb-4745-879b-9ebb950e99c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:14:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"397c0b68-e3eb-4745-879b-9ebb950e99c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5750 chars]
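
	The "Waited for ... due to client-side throttling" lines are client-go pacing its own requests with a token-bucket limiter, before the server's priority-and-fairness machinery is ever involved. A small sketch of that limiter; QPS 5 and burst 10 are client-go's long-standing defaults when rest.Config leaves them unset, assumed here rather than read from minikube's configuration:

	    package main

	    import (
	        "fmt"
	        "time"

	        "k8s.io/client-go/util/flowcontrol"
	    )

	    func main() {
	        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10) // qps, burst
	        for i := 0; i < 15; i++ {
	            start := time.Now()
	            limiter.Accept() // blocks until a token is available
	            // After the initial burst of 10, each call waits roughly 200ms.
	            fmt.Printf("request %2d waited %v\n", i, time.Since(start))
	        }
	    }
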
	I1009 23:20:45.351681  102501 request.go:629] Waited for 197.379442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619-m02
	I1009 23:20:45.351746  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619-m02
	I1009 23:20:45.351756  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:45.351769  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:45.351779  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:45.354658  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:45.354677  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:45.354684  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:45.354690  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:45 GMT
	I1009 23:20:45.354699  102501 round_trippers.go:580]     Audit-Id: bf8f1e3c-f3b9-41f6-a533-853c4960c94f
	I1009 23:20:45.354714  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:45.354721  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:45.354738  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:45.354947  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619-m02","uid":"fccae5d8-c831-4dfb-91f9-523a6eb81706","resourceVersion":"992","creationTimestamp":"2023-10-09T23:18:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:18:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3253 chars]
	I1009 23:20:45.355244  102501 pod_ready.go:92] pod "kube-proxy-qlflz" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:45.355260  102501 pod_ready.go:81] duration metric: took 376.677019ms waiting for pod "kube-proxy-qlflz" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:45.355270  102501 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t28g5" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:45.551819  102501 request.go:629] Waited for 196.475136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t28g5
	I1009 23:20:45.551890  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t28g5
	I1009 23:20:45.551901  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:45.551912  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:45.551921  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:45.555808  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:45.555840  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:45.555850  102501 round_trippers.go:580]     Audit-Id: 183db5e6-6cf3-467c-ab4d-01de8ad3bad8
	I1009 23:20:45.555858  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:45.555866  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:45.555873  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:45.555881  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:45.555890  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:45 GMT
	I1009 23:20:45.556097  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t28g5","generateName":"kube-proxy-","namespace":"kube-system","uid":"e6e517cb-b1f0-4baa-9bb8-7eb0a8f4c339","resourceVersion":"1207","creationTimestamp":"2023-10-09T23:13:22Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"397c0b68-e3eb-4745-879b-9ebb950e99c2","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"397c0b68-e3eb-4745-879b-9ebb950e99c2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I1009 23:20:45.750871  102501 request.go:629] Waited for 194.305299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:45.750950  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:45.750962  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:45.750974  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:45.750987  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:45.753850  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:45.753869  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:45.753879  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:45.753887  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:45.753894  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:45.753901  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:45 GMT
	I1009 23:20:45.753908  102501 round_trippers.go:580]     Audit-Id: a58516a3-a80a-46d3-977c-3cd88f17b3d5
	I1009 23:20:45.753917  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:45.754077  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:45.754532  102501 pod_ready.go:92] pod "kube-proxy-t28g5" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:45.754554  102501 pod_ready.go:81] duration metric: took 399.276515ms waiting for pod "kube-proxy-t28g5" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:45.754567  102501 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:45.950958  102501 request.go:629] Waited for 196.305216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-921619
	I1009 23:20:45.951034  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-921619
	I1009 23:20:45.951041  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:45.951053  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:45.951065  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:45.954563  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:45.954586  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:45.954595  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:45.954603  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:45.954618  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:45.954626  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:45.954637  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:45 GMT
	I1009 23:20:45.954647  102501 round_trippers.go:580]     Audit-Id: db7cc390-b3ad-4335-a1c4-ff6a07f55ba0
	I1009 23:20:45.954968  102501 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-921619","namespace":"kube-system","uid":"9dc6b59f-e995-4b55-a755-8190f5c2d586","resourceVersion":"1219","creationTimestamp":"2023-10-09T23:13:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"791efd99637773aca959cb55de9c4adc","kubernetes.io/config.mirror":"791efd99637773aca959cb55de9c4adc","kubernetes.io/config.seen":"2023-10-09T23:13:10.214452753Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I1009 23:20:46.151772  102501 request.go:629] Waited for 196.378051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:46.151849  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes/multinode-921619
	I1009 23:20:46.151857  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:46.151865  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:46.151871  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:46.154311  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:46.154326  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:46.154339  102501 round_trippers.go:580]     Audit-Id: b4312cf1-e832-4b2f-908d-937dc67188bf
	I1009 23:20:46.154352  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:46.154360  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:46.154368  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:46.154376  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:46.154387  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:46 GMT
	I1009 23:20:46.154935  102501 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-10-09T23:13:06Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I1009 23:20:46.155246  102501 pod_ready.go:92] pod "kube-scheduler-multinode-921619" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:46.155262  102501 pod_ready.go:81] duration metric: took 400.684725ms waiting for pod "kube-scheduler-multinode-921619" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:46.155276  102501 pod_ready.go:38] duration metric: took 12.728696491s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 23:20:46.155306  102501 api_server.go:52] waiting for apiserver process to appear ...
	I1009 23:20:46.155360  102501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:20:46.168221  102501 command_runner.go:130] > 1551
	I1009 23:20:46.168250  102501 api_server.go:72] duration metric: took 15.125659604s to wait for apiserver process to appear ...
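
	The process check is a single pgrep over SSH: with -f the pattern is matched against the full command line, -x requires the whole line to match, and -n keeps only the newest match, so a healthy run prints exactly one PID (1551 above). A local sketch of the same probe:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // Same flags and pattern as the log; pgrep exits non-zero on no match.
	        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	        if err != nil {
	            fmt.Println("apiserver process not found:", err)
	            return
	        }
	        fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
	    }
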
	I1009 23:20:46.168258  102501 api_server.go:88] waiting for apiserver healthz status ...
	I1009 23:20:46.168274  102501 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I1009 23:20:46.173103  102501 api_server.go:279] https://192.168.39.167:8443/healthz returned 200:
	ok
	I1009 23:20:46.173166  102501 round_trippers.go:463] GET https://192.168.39.167:8443/version
	I1009 23:20:46.173178  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:46.173188  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:46.173198  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:46.174086  102501 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1009 23:20:46.174102  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:46.174111  102501 round_trippers.go:580]     Audit-Id: 251714ce-1c94-42b9-a8ab-32715e6a22d6
	I1009 23:20:46.174120  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:46.174131  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:46.174150  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:46.174166  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:46.174175  102501 round_trippers.go:580]     Content-Length: 263
	I1009 23:20:46.174188  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:46 GMT
	I1009 23:20:46.174212  102501 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1009 23:20:46.174262  102501 api_server.go:141] control plane version: v1.28.2
	I1009 23:20:46.174278  102501 api_server.go:131] duration metric: took 6.014ms to wait for apiserver health ...
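
	Two quick HTTP probes settle the health wait: /healthz answers with the bare body "ok" and /version returns the build-info JSON shown above. An illustrative Go client for both endpoints; certificate verification is skipped only to keep the sketch short, whereas the real check authenticates with the cluster's client certificates (an anonymous call can get 401 on clusters that disable anonymous auth):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	    )

	    func main() {
	        client := &http.Client{Transport: &http.Transport{
	            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	        }}
	        for _, path := range []string{"/healthz", "/version"} {
	            resp, err := client.Get("https://192.168.39.167:8443" + path)
	            if err != nil {
	                fmt.Println(path, "error:", err)
	                continue
	            }
	            body, _ := io.ReadAll(resp.Body)
	            resp.Body.Close()
	            fmt.Printf("%s -> %s\n%s\n", path, resp.Status, body)
	        }
	    }
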
	I1009 23:20:46.174288  102501 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 23:20:46.351709  102501 request.go:629] Waited for 177.345783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:46.351843  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:46.351856  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:46.351868  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:46.351886  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:46.359372  102501 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1009 23:20:46.359402  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:46.359410  102501 round_trippers.go:580]     Audit-Id: 7020ed98-f629-4cbe-b064-45c47376cfa8
	I1009 23:20:46.359415  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:46.359420  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:46.359425  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:46.359430  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:46.359436  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:46 GMT
	I1009 23:20:46.361102  102501 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1257"},"items":[{"metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1248","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83389 chars]
	I1009 23:20:46.364625  102501 system_pods.go:59] 12 kube-system pods found
	I1009 23:20:46.364658  102501 system_pods.go:61] "coredns-5dd5756b68-m56ds" [2898e186-93b2-49f3-9e87-2f6c4f5619ef] Running
	I1009 23:20:46.364665  102501 system_pods.go:61] "etcd-multinode-921619" [5642d3e0-eecc-4fce-a750-9c68f66042e8] Running
	I1009 23:20:46.364671  102501 system_pods.go:61] "kindnet-ddwsx" [2475cf58-f505-4b9f-b133-dcd2cdb74489] Running
	I1009 23:20:46.364678  102501 system_pods.go:61] "kindnet-mvhgv" [c66b80a9-b1d2-43b8-b1f2-a9be10b998a6] Running
	I1009 23:20:46.364685  102501 system_pods.go:61] "kindnet-w7ch7" [21dbde88-f1f9-40d2-9893-8ee4b88088bd] Running
	I1009 23:20:46.364693  102501 system_pods.go:61] "kube-apiserver-multinode-921619" [bb483c09-0ecb-447b-a339-2494340bda70] Running
	I1009 23:20:46.364700  102501 system_pods.go:61] "kube-controller-manager-multinode-921619" [e39c9043-b776-4ae0-b79a-528bf4fe7198] Running
	I1009 23:20:46.364707  102501 system_pods.go:61] "kube-proxy-6nfdb" [5cbea5fb-98dd-4276-9b89-588271309935] Running
	I1009 23:20:46.364720  102501 system_pods.go:61] "kube-proxy-qlflz" [18003542-04f4-4330-8054-2e82da1f94f0] Running
	I1009 23:20:46.364726  102501 system_pods.go:61] "kube-proxy-t28g5" [e6e517cb-b1f0-4baa-9bb8-7eb0a8f4c339] Running
	I1009 23:20:46.364736  102501 system_pods.go:61] "kube-scheduler-multinode-921619" [9dc6b59f-e995-4b55-a755-8190f5c2d586] Running
	I1009 23:20:46.364745  102501 system_pods.go:61] "storage-provisioner" [cdc4f60e-144f-44b8-ac4f-741589b7146f] Running
	I1009 23:20:46.364755  102501 system_pods.go:74] duration metric: took 190.457725ms to wait for pod list to return data ...
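
	The "12 kube-system pods found" summary is one List call over the kube-system namespace with each item's phase read from its status. A client-go sketch that produces the same kind of listing; the kubeconfig path is a placeholder, not the path from this run:

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	        if err != nil {
	            panic(err)
	        }
	        client, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            panic(err)
	        }
	        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	        for _, p := range pods.Items {
	            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	        }
	    }
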
	I1009 23:20:46.364767  102501 default_sa.go:34] waiting for default service account to be created ...
	I1009 23:20:46.551260  102501 request.go:629] Waited for 186.405273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/default/serviceaccounts
	I1009 23:20:46.551350  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/default/serviceaccounts
	I1009 23:20:46.551357  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:46.551376  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:46.551390  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:46.554613  102501 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:46.554632  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:46.554641  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:46.554647  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:46.554653  102501 round_trippers.go:580]     Content-Length: 262
	I1009 23:20:46.554658  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:46 GMT
	I1009 23:20:46.554664  102501 round_trippers.go:580]     Audit-Id: eb20c408-c19c-4657-b12f-b799b4d76f81
	I1009 23:20:46.554670  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:46.554679  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:46.554709  102501 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1258"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c91b1817-a383-4590-ae03-64162cee6fef","resourceVersion":"335","creationTimestamp":"2023-10-09T23:13:22Z"}}]}
	I1009 23:20:46.554933  102501 default_sa.go:45] found service account: "default"
	I1009 23:20:46.554954  102501 default_sa.go:55] duration metric: took 190.176623ms for default service account to be created ...
	I1009 23:20:46.554965  102501 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 23:20:46.751450  102501 request.go:629] Waited for 196.397016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:46.751524  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:46.751544  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:46.751558  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:46.751572  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:46.755611  102501 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 23:20:46.755632  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:46.755639  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:46.755646  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:46.755654  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:46.755663  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:46 GMT
	I1009 23:20:46.755672  102501 round_trippers.go:580]     Audit-Id: 0009ccfa-6d5b-4fda-83db-3c422b0352c2
	I1009 23:20:46.755680  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:46.757010  102501 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1258"},"items":[{"metadata":{"name":"coredns-5dd5756b68-m56ds","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2898e186-93b2-49f3-9e87-2f6c4f5619ef","resourceVersion":"1248","creationTimestamp":"2023-10-09T23:13:23Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"e8b8c3e5-64e9-4429-b9c6-396f44d33653","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:13:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e8b8c3e5-64e9-4429-b9c6-396f44d33653\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83389 chars]
	I1009 23:20:46.759494  102501 system_pods.go:86] 12 kube-system pods found
	I1009 23:20:46.759514  102501 system_pods.go:89] "coredns-5dd5756b68-m56ds" [2898e186-93b2-49f3-9e87-2f6c4f5619ef] Running
	I1009 23:20:46.759522  102501 system_pods.go:89] "etcd-multinode-921619" [5642d3e0-eecc-4fce-a750-9c68f66042e8] Running
	I1009 23:20:46.759526  102501 system_pods.go:89] "kindnet-ddwsx" [2475cf58-f505-4b9f-b133-dcd2cdb74489] Running
	I1009 23:20:46.759530  102501 system_pods.go:89] "kindnet-mvhgv" [c66b80a9-b1d2-43b8-b1f2-a9be10b998a6] Running
	I1009 23:20:46.759535  102501 system_pods.go:89] "kindnet-w7ch7" [21dbde88-f1f9-40d2-9893-8ee4b88088bd] Running
	I1009 23:20:46.759542  102501 system_pods.go:89] "kube-apiserver-multinode-921619" [bb483c09-0ecb-447b-a339-2494340bda70] Running
	I1009 23:20:46.759553  102501 system_pods.go:89] "kube-controller-manager-multinode-921619" [e39c9043-b776-4ae0-b79a-528bf4fe7198] Running
	I1009 23:20:46.759559  102501 system_pods.go:89] "kube-proxy-6nfdb" [5cbea5fb-98dd-4276-9b89-588271309935] Running
	I1009 23:20:46.759565  102501 system_pods.go:89] "kube-proxy-qlflz" [18003542-04f4-4330-8054-2e82da1f94f0] Running
	I1009 23:20:46.759573  102501 system_pods.go:89] "kube-proxy-t28g5" [e6e517cb-b1f0-4baa-9bb8-7eb0a8f4c339] Running
	I1009 23:20:46.759578  102501 system_pods.go:89] "kube-scheduler-multinode-921619" [9dc6b59f-e995-4b55-a755-8190f5c2d586] Running
	I1009 23:20:46.759584  102501 system_pods.go:89] "storage-provisioner" [cdc4f60e-144f-44b8-ac4f-741589b7146f] Running
	I1009 23:20:46.759590  102501 system_pods.go:126] duration metric: took 204.615857ms to wait for k8s-apps to be running ...
	I1009 23:20:46.759607  102501 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 23:20:46.759672  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:20:46.773659  102501 system_svc.go:56] duration metric: took 14.042695ms WaitForService to wait for kubelet.
	I1009 23:20:46.773696  102501 kubeadm.go:581] duration metric: took 15.731104662s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
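
	The kubelet check carries no output at all: systemctl is-active --quiet prints nothing, and its exit code answers the question (0 when active), which is why the log records only a duration. A local sketch of the same test, mirroring the command line from the log (minikube executes it remotely through its ssh_runner):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Exit code 0 means active; any non-zero exit surfaces as err here.
	        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	        if err := cmd.Run(); err != nil {
	            fmt.Println("kubelet is not active:", err)
	            return
	        }
	        fmt.Println("kubelet is active")
	    }
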
	I1009 23:20:46.773713  102501 node_conditions.go:102] verifying NodePressure condition ...
	I1009 23:20:46.951138  102501 request.go:629] Waited for 177.328875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.167:8443/api/v1/nodes
	I1009 23:20:46.951197  102501 round_trippers.go:463] GET https://192.168.39.167:8443/api/v1/nodes
	I1009 23:20:46.951202  102501 round_trippers.go:469] Request Headers:
	I1009 23:20:46.951210  102501 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:46.951216  102501 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 23:20:46.953890  102501 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:46.953916  102501 round_trippers.go:577] Response Headers:
	I1009 23:20:46.953926  102501 round_trippers.go:580]     Audit-Id: 4dca49e1-a499-4563-ab9b-cf42c45625d0
	I1009 23:20:46.953935  102501 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:46.953942  102501 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:46.953950  102501 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: f8e0e310-25ab-4361-9c8c-1ced2dda7a95
	I1009 23:20:46.953958  102501 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2641a0f3-2c8f-431d-a02c-f4b66d8e8f5a
	I1009 23:20:46.953967  102501 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:46 GMT
	I1009 23:20:46.954192  102501 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1259"},"items":[{"metadata":{"name":"multinode-921619","uid":"d36fe70b-a6ce-4a0e-8059-86e3939b2f35","resourceVersion":"1213","creationTimestamp":"2023-10-09T23:13:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-921619","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-921619","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_13_11_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 9457 chars]
	I1009 23:20:46.954646  102501 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1009 23:20:46.954664  102501 node_conditions.go:123] node cpu capacity is 2
	I1009 23:20:46.954676  102501 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1009 23:20:46.954680  102501 node_conditions.go:123] node cpu capacity is 2
	I1009 23:20:46.954684  102501 node_conditions.go:105] duration metric: took 180.967837ms to run NodePressure ...
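Both nodes report 2 CPUs and 17784752Ki of ephemeral storage, read straight out of the NodeList response above. A sketch of the equivalent query, assuming kubectl is pointed at this profile's kubeconfig:

	# Name, CPU capacity, and ephemeral-storage capacity for every node in the cluster.
	kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage'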
	I1009 23:20:46.954697  102501 start.go:228] waiting for startup goroutines ...
	I1009 23:20:46.954704  102501 start.go:233] waiting for cluster config update ...
	I1009 23:20:46.954710  102501 start.go:242] writing updated cluster config ...
	I1009 23:20:46.955165  102501 config.go:182] Loaded profile config "multinode-921619": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:20:46.955244  102501 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/config.json ...
	I1009 23:20:46.958391  102501 out.go:177] * Starting worker node multinode-921619-m02 in cluster multinode-921619
	I1009 23:20:46.959599  102501 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1009 23:20:46.959618  102501 cache.go:57] Caching tarball of preloaded images
	I1009 23:20:46.959708  102501 preload.go:174] Found /home/jenkins/minikube-integration/17375-78415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1009 23:20:46.959752  102501 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1009 23:20:46.959850  102501 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/config.json ...
	I1009 23:20:46.960012  102501 start.go:365] acquiring machines lock for multinode-921619-m02: {Name:mk4d06451f08f4d0dfbc191a7a07492b6e7c9c1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 23:20:46.960057  102501 start.go:369] acquired machines lock for "multinode-921619-m02" in 23.889µs
	I1009 23:20:46.960070  102501 start.go:96] Skipping create...Using existing machine configuration
	I1009 23:20:46.960100  102501 fix.go:54] fixHost starting: m02
	I1009 23:20:46.960364  102501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:20:46.960385  102501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:20:46.975273  102501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I1009 23:20:46.975676  102501 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:20:46.976086  102501 main.go:141] libmachine: Using API Version  1
	I1009 23:20:46.976107  102501 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:20:46.976465  102501 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:20:46.976685  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:20:46.976840  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetState
	I1009 23:20:46.978216  102501 fix.go:102] recreateIfNeeded on multinode-921619-m02: state=Stopped err=<nil>
	I1009 23:20:46.978237  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	W1009 23:20:46.978399  102501 fix.go:128] unexpected machine state, will restart: <nil>
	I1009 23:20:46.980512  102501 out.go:177] * Restarting existing kvm2 VM for "multinode-921619-m02" ...
	I1009 23:20:46.982010  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .Start
	I1009 23:20:46.982178  102501 main.go:141] libmachine: (multinode-921619-m02) Ensuring networks are active...
	I1009 23:20:46.983008  102501 main.go:141] libmachine: (multinode-921619-m02) Ensuring network default is active
	I1009 23:20:46.983363  102501 main.go:141] libmachine: (multinode-921619-m02) Ensuring network mk-multinode-921619 is active
	I1009 23:20:46.983694  102501 main.go:141] libmachine: (multinode-921619-m02) Getting domain xml...
	I1009 23:20:46.984368  102501 main.go:141] libmachine: (multinode-921619-m02) Creating domain...
	I1009 23:20:48.219485  102501 main.go:141] libmachine: (multinode-921619-m02) Waiting to get IP...
	I1009 23:20:48.220359  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:48.220751  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:48.220838  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:48.220751  102756 retry.go:31] will retry after 245.464617ms: waiting for machine to come up
	I1009 23:20:48.468314  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:48.469046  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:48.469082  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:48.469004  102756 retry.go:31] will retry after 350.744462ms: waiting for machine to come up
	I1009 23:20:48.821651  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:48.822041  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:48.822074  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:48.821996  102756 retry.go:31] will retry after 470.473303ms: waiting for machine to come up
	I1009 23:20:49.293577  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:49.294000  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:49.294027  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:49.293956  102756 retry.go:31] will retry after 528.498289ms: waiting for machine to come up
	I1009 23:20:49.823754  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:49.824205  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:49.824239  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:49.824149  102756 retry.go:31] will retry after 599.07991ms: waiting for machine to come up
	I1009 23:20:50.425102  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:50.425578  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:50.425608  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:50.425558  102756 retry.go:31] will retry after 943.690172ms: waiting for machine to come up
	I1009 23:20:51.370851  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:51.371291  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:51.371313  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:51.371246  102756 retry.go:31] will retry after 854.904577ms: waiting for machine to come up
	I1009 23:20:52.227662  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:52.228276  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:52.228306  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:52.228191  102756 retry.go:31] will retry after 917.09776ms: waiting for machine to come up
	I1009 23:20:53.146757  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:53.147192  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:53.147219  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:53.147154  102756 retry.go:31] will retry after 1.295311521s: waiting for machine to come up
	I1009 23:20:54.444793  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:54.445242  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:54.445268  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:54.445172  102756 retry.go:31] will retry after 1.672827257s: waiting for machine to come up
	I1009 23:20:56.120177  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:56.120699  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:56.120730  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:56.120643  102756 retry.go:31] will retry after 2.846317127s: waiting for machine to come up
	I1009 23:20:58.968533  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:20:58.968968  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:20:58.968998  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:20:58.968916  102756 retry.go:31] will retry after 2.625389438s: waiting for machine to come up
	I1009 23:21:01.597675  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:01.598117  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | unable to find current IP address of domain multinode-921619-m02 in network mk-multinode-921619
	I1009 23:21:01.598146  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | I1009 23:21:01.598064  102756 retry.go:31] will retry after 3.673921353s: waiting for machine to come up
	I1009 23:21:05.275970  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.276389  102501 main.go:141] libmachine: (multinode-921619-m02) Found IP for machine: 192.168.39.121
	I1009 23:21:05.276417  102501 main.go:141] libmachine: (multinode-921619-m02) Reserving static IP address...
	I1009 23:21:05.276435  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has current primary IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.276813  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "multinode-921619-m02", mac: "52:54:00:56:ca:45", ip: "192.168.39.121"} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.276874  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | skip adding static IP to network mk-multinode-921619 - found existing host DHCP lease matching {name: "multinode-921619-m02", mac: "52:54:00:56:ca:45", ip: "192.168.39.121"}
	I1009 23:21:05.276899  102501 main.go:141] libmachine: (multinode-921619-m02) Reserved static IP address: 192.168.39.121
	I1009 23:21:05.276916  102501 main.go:141] libmachine: (multinode-921619-m02) Waiting for SSH to be available...
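The DBG retry loop above is minikube polling libvirt's DHCP leases with increasing backoff until the restarted domain picks up an address; the lease that finally matches is printed in full. The same lookup can be done by hand with virsh, assuming the network name shown in the log:

	# Show DHCP leases on the minikube libvirt network, filtered to this VM's MAC.
	virsh net-dhcp-leases mk-multinode-921619 --mac 52:54:00:56:ca:45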
	I1009 23:21:05.276935  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | Getting to WaitForSSH function...
	I1009 23:21:05.278973  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.279297  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.279331  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.279458  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | Using SSH client type: external
	I1009 23:21:05.279481  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619-m02/id_rsa (-rw-------)
	I1009 23:21:05.279513  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 23:21:05.279533  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | About to run SSH command:
	I1009 23:21:05.279549  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | exit 0
	I1009 23:21:05.374318  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | SSH cmd err, output: <nil>: 
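WaitForSSH shells out to the system ssh binary with host-key checking disabled and runs `exit 0` until it returns cleanly. A sketch of the same probe, using only the options shown in the argument vector above:

	# Exit status 0 means sshd is up and the key is accepted; anything else means keep retrying.
	ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
	    -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619-m02/id_rsa \
	    -p 22 docker@192.168.39.121 'exit 0' && echo "ssh: reachable"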
	I1009 23:21:05.374661  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetConfigRaw
	I1009 23:21:05.375254  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetIP
	I1009 23:21:05.377674  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.378063  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.378090  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.378311  102501 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/multinode-921619/config.json ...
	I1009 23:21:05.378512  102501 machine.go:88] provisioning docker machine ...
	I1009 23:21:05.378529  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:21:05.378762  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetMachineName
	I1009 23:21:05.378938  102501 buildroot.go:166] provisioning hostname "multinode-921619-m02"
	I1009 23:21:05.378954  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetMachineName
	I1009 23:21:05.379121  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:05.381580  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.381916  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.381949  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.382097  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:05.382274  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:05.382429  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:05.382579  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:05.382753  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:21:05.383064  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1009 23:21:05.383078  102501 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-921619-m02 && echo "multinode-921619-m02" | sudo tee /etc/hostname
	I1009 23:21:05.526708  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-921619-m02
	
	I1009 23:21:05.526740  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:05.529479  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.529875  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.529901  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.530073  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:05.530273  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:05.530446  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:05.530597  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:05.530765  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:21:05.531082  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1009 23:21:05.531104  102501 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-921619-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-921619-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-921619-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 23:21:05.668451  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 23:21:05.668486  102501 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17375-78415/.minikube CaCertPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17375-78415/.minikube}
	I1009 23:21:05.668513  102501 buildroot.go:174] setting up certificates
	I1009 23:21:05.668525  102501 provision.go:83] configureAuth start
	I1009 23:21:05.668543  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetMachineName
	I1009 23:21:05.668856  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetIP
	I1009 23:21:05.672117  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.672492  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.672521  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.672621  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:05.674833  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.675258  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.675289  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.675379  102501 provision.go:138] copyHostCerts
	I1009 23:21:05.675418  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem
	I1009 23:21:05.675453  102501 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem, removing ...
	I1009 23:21:05.675465  102501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem
	I1009 23:21:05.675534  102501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem (1123 bytes)
	I1009 23:21:05.675605  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem
	I1009 23:21:05.675625  102501 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem, removing ...
	I1009 23:21:05.675631  102501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem
	I1009 23:21:05.675654  102501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem (1679 bytes)
	I1009 23:21:05.675696  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem
	I1009 23:21:05.675711  102501 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem, removing ...
	I1009 23:21:05.675717  102501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem
	I1009 23:21:05.675738  102501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem (1082 bytes)
	I1009 23:21:05.675781  102501 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca-key.pem org=jenkins.multinode-921619-m02 san=[192.168.39.121 192.168.39.121 localhost 127.0.0.1 minikube multinode-921619-m02]
	I1009 23:21:05.775297  102501 provision.go:172] copyRemoteCerts
	I1009 23:21:05.775364  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 23:21:05.775399  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:05.777922  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.778216  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.778241  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.778421  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:05.778618  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:05.778759  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:05.778903  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619-m02/id_rsa Username:docker}
	I1009 23:21:05.871513  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 23:21:05.871585  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 23:21:05.898494  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 23:21:05.898564  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 23:21:05.924733  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 23:21:05.924807  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1009 23:21:05.950405  102501 provision.go:86] duration metric: configureAuth took 281.86296ms
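The server certificate just copied was generated a moment earlier with SANs for 192.168.39.121, localhost, 127.0.0.1, minikube, and multinode-921619-m02. A quick way to confirm them on the node (assumes OpenSSL 1.1.1+ for -ext):

	# Print the subject and the subjectAltName extension of the deployed cert.
	sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName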
	I1009 23:21:05.950428  102501 buildroot.go:189] setting minikube options for container-runtime
	I1009 23:21:05.950675  102501 config.go:182] Loaded profile config "multinode-921619": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:21:05.950700  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:21:05.950985  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:05.953474  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.953818  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:05.953848  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:05.954012  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:05.954222  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:05.954392  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:05.954540  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:05.954775  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:21:05.955252  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1009 23:21:05.955270  102501 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1009 23:21:06.084257  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1009 23:21:06.084284  102501 buildroot.go:70] root file system type: tmpfs
	I1009 23:21:06.084443  102501 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1009 23:21:06.084467  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:06.087304  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:06.087702  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:06.087722  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:06.087930  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:06.088129  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:06.088329  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:06.088489  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:06.088630  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:21:06.088929  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1009 23:21:06.088987  102501 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.167"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1009 23:21:06.235570  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.167
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1009 23:21:06.235608  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:06.238489  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:06.238958  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:06.238980  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:06.239186  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:06.239383  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:06.239528  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:06.239660  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:06.239802  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:21:06.240139  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1009 23:21:06.240165  102501 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1009 23:21:07.140108  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1009 23:21:07.140142  102501 machine.go:91] provisioned docker machine in 1.761612342s
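`diff -u old new || { mv; daemon-reload; enable; restart; }` is the idempotent-update idiom: the unit is only swapped in when it differs from what is on disk, and here diff fails outright because no docker.service existed yet, so the new unit is installed and the multi-user.target symlink created. A sketch for confirming what systemd actually loaded afterwards (assumes a shell on the node):

	# Show the effective drop-in directives and the enablement state.
	systemctl cat docker.service | grep -E '^(Environment|ExecStart)='
	systemctl is-enabled docker.service    # expect: enabled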
	I1009 23:21:07.140154  102501 start.go:300] post-start starting for "multinode-921619-m02" (driver="kvm2")
	I1009 23:21:07.140165  102501 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 23:21:07.140181  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:21:07.140568  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 23:21:07.140608  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:07.143238  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.143593  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:07.143628  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.143735  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:07.143932  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:07.144139  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:07.144298  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619-m02/id_rsa Username:docker}
	I1009 23:21:07.241724  102501 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 23:21:07.246026  102501 command_runner.go:130] > NAME=Buildroot
	I1009 23:21:07.246048  102501 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1009 23:21:07.246055  102501 command_runner.go:130] > ID=buildroot
	I1009 23:21:07.246064  102501 command_runner.go:130] > VERSION_ID=2021.02.12
	I1009 23:21:07.246072  102501 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1009 23:21:07.246215  102501 info.go:137] Remote host: Buildroot 2021.02.12
	I1009 23:21:07.246237  102501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-78415/.minikube/addons for local assets ...
	I1009 23:21:07.246303  102501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-78415/.minikube/files for local assets ...
	I1009 23:21:07.246394  102501 filesync.go:149] local asset: /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem -> 856012.pem in /etc/ssl/certs
	I1009 23:21:07.246408  102501 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem -> /etc/ssl/certs/856012.pem
	I1009 23:21:07.246528  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 23:21:07.256350  102501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem --> /etc/ssl/certs/856012.pem (1708 bytes)
	I1009 23:21:07.280688  102501 start.go:303] post-start completed in 140.517748ms
	I1009 23:21:07.280709  102501 fix.go:56] fixHost completed within 20.320607071s
	I1009 23:21:07.280736  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:07.283160  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.283506  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:07.283538  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.283648  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:07.283836  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:07.284052  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:07.284222  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:07.284416  102501 main.go:141] libmachine: Using SSH client type: native
	I1009 23:21:07.284868  102501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I1009 23:21:07.284885  102501 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1009 23:21:07.415288  102501 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696893667.361858032
	
	I1009 23:21:07.415308  102501 fix.go:206] guest clock: 1696893667.361858032
	I1009 23:21:07.415323  102501 fix.go:219] Guest: 2023-10-09 23:21:07.361858032 +0000 UTC Remote: 2023-10-09 23:21:07.280714025 +0000 UTC m=+84.775359462 (delta=81.144007ms)
	I1009 23:21:07.415338  102501 fix.go:190] guest clock delta is within tolerance: 81.144007ms
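fix.go only resyncs the guest clock when the host/guest delta leaves tolerance, and 81ms is comfortably inside it. A rough sketch of the same measurement, reusing the SSH invocation from earlier in this log (the SSH round trip itself inflates the delta slightly):

	host=$(date +%s.%N)
	guest=$(ssh -i /home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619-m02/id_rsa \
	        docker@192.168.39.121 'date +%s.%N')
	# Positive delta: guest clock ahead of host; negative: behind.
	awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %+.3fs\n", g - h }'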
	I1009 23:21:07.415343  102501 start.go:83] releasing machines lock for "multinode-921619-m02", held for 20.45527802s
	I1009 23:21:07.415385  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:21:07.415661  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetIP
	I1009 23:21:07.418237  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.418631  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:07.418664  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.420859  102501 out.go:177] * Found network options:
	I1009 23:21:07.422414  102501 out.go:177]   - NO_PROXY=192.168.39.167
	W1009 23:21:07.423800  102501 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 23:21:07.423827  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:21:07.424371  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:21:07.424563  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:21:07.424650  102501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 23:21:07.424698  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	W1009 23:21:07.424799  102501 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 23:21:07.424880  102501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 23:21:07.424909  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:21:07.427387  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.427667  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.427774  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:07.427799  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.427981  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:07.428060  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:17:49 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:21:07.428088  102501 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:21:07.428155  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:07.428260  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:21:07.428362  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:07.428427  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:21:07.428506  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619-m02/id_rsa Username:docker}
	I1009 23:21:07.428552  102501 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:21:07.428701  102501 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619-m02/id_rsa Username:docker}
	I1009 23:21:07.544604  102501 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 23:21:07.545508  102501 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 23:21:07.545555  102501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 23:21:07.545624  102501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:21:07.562734  102501 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1009 23:21:07.562776  102501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
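The find invocation above (its %p format verb came through the logger mangled as %!p(MISSING)) renames any bridge or podman CNI configs so they stop shadowing the cluster's own CNI; here it disabled 87-podman-bridge.conflist. A readable equivalent of the same command, assuming the same directory layout:

	# Move every bridge/podman config aside unless it is already disabled.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;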
	I1009 23:21:07.562793  102501 start.go:472] detecting cgroup driver to use...
	I1009 23:21:07.562952  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:21:07.579411  102501 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1009 23:21:07.579863  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1009 23:21:07.590123  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 23:21:07.600552  102501 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1009 23:21:07.600609  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1009 23:21:07.610642  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 23:21:07.620936  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 23:21:07.631499  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 23:21:07.641667  102501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 23:21:07.651948  102501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 23:21:07.662213  102501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 23:21:07.671249  102501 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 23:21:07.671396  102501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 23:21:07.681591  102501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:21:07.782428  102501 ssh_runner.go:195] Run: sudo systemctl restart containerd
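The sed pass above pins the pause image to registry.k8s.io/pause:3.9, switches containerd to the cgroupfs driver (SystemdCgroup = false), and normalizes the runtime to io.containerd.runc.v2 before the restart. A quick check that the edits landed, assuming a shell on the node:

	grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# expect: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.9", conf_dir = "/etc/cni/net.d"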
	I1009 23:21:07.803946  102501 start.go:472] detecting cgroup driver to use...
	I1009 23:21:07.804035  102501 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1009 23:21:07.817049  102501 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1009 23:21:07.818004  102501 command_runner.go:130] > [Unit]
	I1009 23:21:07.818024  102501 command_runner.go:130] > Description=Docker Application Container Engine
	I1009 23:21:07.818030  102501 command_runner.go:130] > Documentation=https://docs.docker.com
	I1009 23:21:07.818035  102501 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1009 23:21:07.818041  102501 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1009 23:21:07.818046  102501 command_runner.go:130] > StartLimitBurst=3
	I1009 23:21:07.818050  102501 command_runner.go:130] > StartLimitIntervalSec=60
	I1009 23:21:07.818054  102501 command_runner.go:130] > [Service]
	I1009 23:21:07.818059  102501 command_runner.go:130] > Type=notify
	I1009 23:21:07.818063  102501 command_runner.go:130] > Restart=on-failure
	I1009 23:21:07.818071  102501 command_runner.go:130] > Environment=NO_PROXY=192.168.39.167
	I1009 23:21:07.818083  102501 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1009 23:21:07.818097  102501 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1009 23:21:07.818103  102501 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1009 23:21:07.818109  102501 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1009 23:21:07.818124  102501 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1009 23:21:07.818135  102501 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1009 23:21:07.818152  102501 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1009 23:21:07.818172  102501 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1009 23:21:07.818182  102501 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1009 23:21:07.818187  102501 command_runner.go:130] > ExecStart=
	I1009 23:21:07.818205  102501 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I1009 23:21:07.818216  102501 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1009 23:21:07.818226  102501 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1009 23:21:07.818303  102501 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1009 23:21:07.818320  102501 command_runner.go:130] > LimitNOFILE=infinity
	I1009 23:21:07.818327  102501 command_runner.go:130] > LimitNPROC=infinity
	I1009 23:21:07.818334  102501 command_runner.go:130] > LimitCORE=infinity
	I1009 23:21:07.818348  102501 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1009 23:21:07.818360  102501 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1009 23:21:07.818371  102501 command_runner.go:130] > TasksMax=infinity
	I1009 23:21:07.818379  102501 command_runner.go:130] > TimeoutStartSec=0
	I1009 23:21:07.818392  102501 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1009 23:21:07.818401  102501 command_runner.go:130] > Delegate=yes
	I1009 23:21:07.818410  102501 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1009 23:21:07.818425  102501 command_runner.go:130] > KillMode=process
	I1009 23:21:07.818436  102501 command_runner.go:130] > [Install]
	I1009 23:21:07.818444  102501 command_runner.go:130] > WantedBy=multi-user.target
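	The drop-in dumped above uses the standard systemd override pattern: the empty ExecStart= line first clears the command inherited from the base unit, and the following ExecStart= supplies the replacement; without the clear, systemd rejects a second ExecStart for anything but Type=oneshot. A minimal sketch of the same pattern, with a hypothetical override path and dockerd flags rather than minikube's actual provisioning code:
	    sudo mkdir -p /etc/systemd/system/docker.service.d
	    # Write a drop-in that clears the inherited ExecStart, then sets a new one:
	    printf '%s\n' '[Service]' 'ExecStart=' \
	      'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' \
	      | sudo tee /etc/systemd/system/docker.service.d/override.conf
	    sudo systemctl daemon-reload && sudo systemctl restart docker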
	I1009 23:21:07.818713  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:21:07.831420  102501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 23:21:07.847570  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:21:07.860576  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 23:21:07.874179  102501 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 23:21:07.910629  102501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 23:21:07.922800  102501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:21:07.940164  102501 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
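	The crictl configuration written above can be reproduced by hand; a sketch assuming cri-dockerd is the runtime (the endpoint matches the log output):
	    sudo mkdir -p /etc
	    printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | sudo tee /etc/crictl.yaml
	    # Sanity check that crictl can reach the runtime (assumes crictl is installed):
	    sudo crictl info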
	I1009 23:21:07.940622  102501 ssh_runner.go:195] Run: which cri-dockerd
	I1009 23:21:07.944358  102501 command_runner.go:130] > /usr/bin/cri-dockerd
	I1009 23:21:07.944465  102501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1009 23:21:07.953761  102501 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1009 23:21:07.970298  102501 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1009 23:21:08.092815  102501 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1009 23:21:08.212602  102501 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1009 23:21:08.212638  102501 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1009 23:21:08.229521  102501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:21:08.331539  102501 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 23:21:09.763543  102501 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.431962056s)
	I1009 23:21:09.763613  102501 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1009 23:21:09.865178  102501 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1009 23:21:09.980252  102501 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1009 23:21:10.091712  102501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:21:10.198034  102501 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1009 23:21:10.212997  102501 command_runner.go:130] ! Job failed. See "journalctl -xe" for details.
	I1009 23:21:10.215690  102501 out.go:177] 
	W1009 23:21:10.217058  102501 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W1009 23:21:10.217073  102501 out.go:239] * 
	W1009 23:21:10.217948  102501 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 23:21:10.219795  102501 out.go:177] 
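	The failed restart of cri-docker.socket is what surfaces above as RUNTIME_ENABLE and exit status 90. The error text defers to journalctl; a hedged sketch of the follow-up commands one would run on the VM to localize the fault (unit names taken from the log, any output hypothetical):
	    sudo systemctl status cri-docker.socket cri-docker.service
	    sudo journalctl -xeu cri-docker.socket
	    # A socket unit can fail to restart while its service still holds the socket;
	    # stopping the service first is one plausible workaround, not a confirmed fix:
	    sudo systemctl stop cri-docker.service && sudo systemctl restart cri-docker.socket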
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-10-09 23:19:54 UTC, ends at Mon 2023-10-09 23:21:11 UTC. --
	Oct 09 23:20:27 multinode-921619 dockerd[836]: time="2023-10-09T23:20:27.960248531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 09 23:20:27 multinode-921619 dockerd[836]: time="2023-10-09T23:20:27.960276837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 23:20:30 multinode-921619 cri-dockerd[1062]: time="2023-10-09T23:20:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9f110ad49a0fdcafca9876b02947033ab8257e5ee0e1cc83d588376dab9d9da9/resolv.conf as [nameserver 192.168.122.1]"
	Oct 09 23:20:30 multinode-921619 dockerd[836]: time="2023-10-09T23:20:30.912092737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 09 23:20:30 multinode-921619 dockerd[836]: time="2023-10-09T23:20:30.912149729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 23:20:30 multinode-921619 dockerd[836]: time="2023-10-09T23:20:30.912167984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 09 23:20:30 multinode-921619 dockerd[836]: time="2023-10-09T23:20:30.912179958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 23:20:42 multinode-921619 dockerd[836]: time="2023-10-09T23:20:42.386568080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 09 23:20:42 multinode-921619 dockerd[836]: time="2023-10-09T23:20:42.386652518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 23:20:42 multinode-921619 dockerd[836]: time="2023-10-09T23:20:42.386675705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 09 23:20:42 multinode-921619 dockerd[836]: time="2023-10-09T23:20:42.386685680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 23:20:42 multinode-921619 dockerd[836]: time="2023-10-09T23:20:42.716165998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 09 23:20:42 multinode-921619 dockerd[836]: time="2023-10-09T23:20:42.717092635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 23:20:42 multinode-921619 dockerd[836]: time="2023-10-09T23:20:42.720353155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 09 23:20:42 multinode-921619 dockerd[836]: time="2023-10-09T23:20:42.720366234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 23:20:42 multinode-921619 cri-dockerd[1062]: time="2023-10-09T23:20:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f39d1ee358aedff5794038b82716c3ffea1b9c89c72021ab2a06a731d8a79578/resolv.conf as [nameserver 192.168.122.1]"
	Oct 09 23:20:43 multinode-921619 dockerd[836]: time="2023-10-09T23:20:43.028091397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 09 23:20:43 multinode-921619 dockerd[836]: time="2023-10-09T23:20:43.029029018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 23:20:43 multinode-921619 dockerd[836]: time="2023-10-09T23:20:43.029186538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 09 23:20:43 multinode-921619 dockerd[836]: time="2023-10-09T23:20:43.029298600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 23:20:43 multinode-921619 cri-dockerd[1062]: time="2023-10-09T23:20:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e5111f4fb441348620927d143bfe29f1a414c6350dbb44278d1b01b2c3f47ea3/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 09 23:20:43 multinode-921619 dockerd[836]: time="2023-10-09T23:20:43.527776696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 09 23:20:43 multinode-921619 dockerd[836]: time="2023-10-09T23:20:43.527920862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 23:20:43 multinode-921619 dockerd[836]: time="2023-10-09T23:20:43.528040041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 09 23:20:43 multinode-921619 dockerd[836]: time="2023-10-09T23:20:43.528057203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	638eca6d4e442       8c811b4aec35f       28 seconds ago      Running             busybox                   2                   e5111f4fb4413       busybox-5bc68d56bd-pbmjv
	9980e6a9f1954       ead0a4a53df89       29 seconds ago      Running             coredns                   2                   f39d1ee358aed       coredns-5dd5756b68-m56ds
	abb0b4d9b2dfe       c7d1297425461       41 seconds ago      Running             kindnet-cni               2                   9f110ad49a0fd       kindnet-mvhgv
	24b4659c9ab7d       6e38f40d628db       44 seconds ago      Running             storage-provisioner       2                   59dab2a3d8cf3       storage-provisioner
	0c38a812e625d       c120fed2beb84       44 seconds ago      Running             kube-proxy                2                   9113c7eed13cd       kube-proxy-t28g5
	f48e2b7e3977b       7a5d9d67a13f6       49 seconds ago      Running             kube-scheduler            2                   3402f69fbede0       kube-scheduler-multinode-921619
	c4a026affbb0a       55f13c92defb1       49 seconds ago      Running             kube-controller-manager   2                   e7eaf74bcacc5       kube-controller-manager-multinode-921619
	1f91aea248267       73deb9a3f7025       49 seconds ago      Running             etcd                      2                   6436dc5653f38       etcd-multinode-921619
	7e45cb6e61a02       cdcab12b2dd16       50 seconds ago      Running             kube-apiserver            2                   647bf3ebb1a52       kube-apiserver-multinode-921619
	6370b8717b18e       8c811b4aec35f       3 minutes ago       Exited              busybox                   1                   e4e03a6042d7f       busybox-5bc68d56bd-pbmjv
	453f6dce464b8       ead0a4a53df89       3 minutes ago       Exited              coredns                   1                   88d988a42798b       coredns-5dd5756b68-m56ds
	af05e798f2ed7       c7d1297425461       3 minutes ago       Exited              kindnet-cni               1                   3f140f1b444f0       kindnet-mvhgv
	865e9ceee649c       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       1                   ce86ce17dc126       storage-provisioner
	fbb07f20fa164       c120fed2beb84       3 minutes ago       Exited              kube-proxy                1                   96f26fc70c3e2       kube-proxy-t28g5
	aa68412027303       73deb9a3f7025       3 minutes ago       Exited              etcd                      1                   665cbd4fad776       etcd-multinode-921619
	2c47ae8aed1aa       7a5d9d67a13f6       3 minutes ago       Exited              kube-scheduler            1                   3e987851ad865       kube-scheduler-multinode-921619
	cb0e5b797b8d9       55f13c92defb1       3 minutes ago       Exited              kube-controller-manager   1                   7ca4344ccad38       kube-controller-manager-multinode-921619
	ac1bbc7d4311a       cdcab12b2dd16       3 minutes ago       Exited              kube-apiserver            1                   3b09d0826e99c       kube-apiserver-multinode-921619
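	The table above is CRI-level container state; the same view can be regenerated on the node, assuming crictl is pointed at cri-dockerd as configured earlier in this log:
	    # List all containers, including the Exited ones from the previous boot:
	    sudo crictl ps -a
	    # Inspect one container by the ID shown in the first column:
	    sudo crictl inspect 638eca6d4e442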
	
	* 
	* ==> coredns [453f6dce464b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35964 - 56607 "HINFO IN 7342071988482304448.6697043081490530046. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.060917784s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [9980e6a9f195] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52410 - 6701 "HINFO IN 2876914702332647359.3085475674604063090. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03415957s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-921619
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-921619
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90
	                    minikube.k8s.io/name=multinode-921619
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_09T23_13_11_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Oct 2023 23:13:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-921619
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Oct 2023 23:21:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Oct 2023 23:20:33 +0000   Mon, 09 Oct 2023 23:13:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Oct 2023 23:20:33 +0000   Mon, 09 Oct 2023 23:13:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Oct 2023 23:20:33 +0000   Mon, 09 Oct 2023 23:13:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Oct 2023 23:20:33 +0000   Mon, 09 Oct 2023 23:20:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.167
	  Hostname:    multinode-921619
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 db3e7b2a74f24591a5788910a2250a7f
	  System UUID:                db3e7b2a-74f2-4591-a578-8910a2250a7f
	  Boot ID:                    94ed459a-0641-4328-9cb4-9a44a159407e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-pbmjv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 coredns-5dd5756b68-m56ds                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m48s
	  kube-system                 etcd-multinode-921619                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m1s
	  kube-system                 kindnet-mvhgv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m49s
	  kube-system                 kube-apiserver-multinode-921619             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m2s
	  kube-system                 kube-controller-manager-multinode-921619    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 kube-proxy-t28g5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-scheduler-multinode-921619             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 7m45s                kube-proxy       
	  Normal  Starting                 43s                  kube-proxy       
	  Normal  Starting                 3m52s                kube-proxy       
	  Normal  NodeAllocatableEnforced  8m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m9s (x8 over 8m9s)  kubelet          Node multinode-921619 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m9s (x7 over 8m9s)  kubelet          Node multinode-921619 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  8m9s (x8 over 8m9s)  kubelet          Node multinode-921619 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m1s                 kubelet          Node multinode-921619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m1s                 kubelet          Node multinode-921619 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m1s                 kubelet          Node multinode-921619 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m49s                node-controller  Node multinode-921619 event: Registered Node multinode-921619 in Controller
	  Normal  NodeReady                7m37s                kubelet          Node multinode-921619 status is now: NodeReady
	  Normal  Starting                 4m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m (x8 over 4m)      kubelet          Node multinode-921619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m (x8 over 4m)      kubelet          Node multinode-921619 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m (x7 over 4m)      kubelet          Node multinode-921619 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m42s                node-controller  Node multinode-921619 event: Registered Node multinode-921619 in Controller
	  Normal  Starting                 51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s (x8 over 51s)    kubelet          Node multinode-921619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 51s)    kubelet          Node multinode-921619 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x7 over 51s)    kubelet          Node multinode-921619 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           33s                  node-controller  Node multinode-921619 event: Registered Node multinode-921619 in Controller
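	This block is standard node-inspection output and can be regenerated against the same cluster (profile and node names taken from the log):
	    kubectl describe node multinode-921619
	    # Condensed view of both nodes, with internal IPs and versions:
	    kubectl get nodes -o wide
	    # Just the pod CIDR assignments that kindnet routes between:
	    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'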
	
	
	Name:               multinode-921619-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-921619-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Oct 2023 23:18:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-921619-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Oct 2023 23:19:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Oct 2023 23:18:16 +0000   Mon, 09 Oct 2023 23:18:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Oct 2023 23:18:16 +0000   Mon, 09 Oct 2023 23:18:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Oct 2023 23:18:16 +0000   Mon, 09 Oct 2023 23:18:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Oct 2023 23:18:16 +0000   Mon, 09 Oct 2023 23:18:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    multinode-921619-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6dc6e5f0962f41598dd26a2770da6187
	  System UUID:                6dc6e5f0-962f-4159-8dd2-6a2770da6187
	  Boot ID:                    8f87fd49-cef7-4819-9a6d-bc889e037da1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-k4jdx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kindnet-ddwsx               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m57s
	  kube-system                 kube-proxy-qlflz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m3s                   kube-proxy       
	  Normal  Starting                 6m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m57s (x5 over 6m59s)  kubelet          Node multinode-921619-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m57s (x5 over 6m59s)  kubelet          Node multinode-921619-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m57s (x5 over 6m59s)  kubelet          Node multinode-921619-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m44s                  kubelet          Node multinode-921619-m02 status is now: NodeReady
	  Normal  Starting                 3m5s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m5s (x2 over 3m5s)    kubelet          Node multinode-921619-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m5s (x2 over 3m5s)    kubelet          Node multinode-921619-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m5s (x2 over 3m5s)    kubelet          Node multinode-921619-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m5s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m55s                  kubelet          Node multinode-921619-m02 status is now: NodeReady
	  Normal  RegisteredNode           33s                    node-controller  Node multinode-921619-m02 event: Registered Node multinode-921619-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct 9 23:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068160] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.296711] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.248153] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152333] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.595607] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 9 23:20] systemd-fstab-generator[515]: Ignoring "noauto" for root device
	[  +0.110275] systemd-fstab-generator[526]: Ignoring "noauto" for root device
	[  +1.177939] systemd-fstab-generator[757]: Ignoring "noauto" for root device
	[  +0.283811] systemd-fstab-generator[797]: Ignoring "noauto" for root device
	[  +0.102607] systemd-fstab-generator[808]: Ignoring "noauto" for root device
	[  +0.119183] systemd-fstab-generator[821]: Ignoring "noauto" for root device
	[  +1.614364] systemd-fstab-generator[1007]: Ignoring "noauto" for root device
	[  +0.108535] systemd-fstab-generator[1018]: Ignoring "noauto" for root device
	[  +0.109440] systemd-fstab-generator[1029]: Ignoring "noauto" for root device
	[  +0.115490] systemd-fstab-generator[1040]: Ignoring "noauto" for root device
	[  +0.126198] systemd-fstab-generator[1054]: Ignoring "noauto" for root device
	[ +12.033320] systemd-fstab-generator[1303]: Ignoring "noauto" for root device
	[  +0.384697] kauditd_printk_skb: 67 callbacks suppressed
	[ +18.373477] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [1f91aea24826] <==
	* {"level":"info","ts":"2023-10-09T23:20:23.086627Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-09T23:20:23.086634Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-09T23:20:23.087157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac switched to configuration voters=(2366053629920448428)"}
	{"level":"info","ts":"2023-10-09T23:20:23.087258Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31f708155da0e645","local-member-id":"20d5e93d92ee8fac","added-peer-id":"20d5e93d92ee8fac","added-peer-peer-urls":["https://192.168.39.167:2380"]}
	{"level":"info","ts":"2023-10-09T23:20:23.087324Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31f708155da0e645","local-member-id":"20d5e93d92ee8fac","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-09T23:20:23.087394Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-09T23:20:23.097313Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-09T23:20:23.099872Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.167:2380"}
	{"level":"info","ts":"2023-10-09T23:20:23.109738Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.167:2380"}
	{"level":"info","ts":"2023-10-09T23:20:23.109925Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"20d5e93d92ee8fac","initial-advertise-peer-urls":["https://192.168.39.167:2380"],"listen-peer-urls":["https://192.168.39.167:2380"],"advertise-client-urls":["https://192.168.39.167:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.167:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-09T23:20:23.110076Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-09T23:20:24.76891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-09T23:20:24.76895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-09T23:20:24.769055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac received MsgPreVoteResp from 20d5e93d92ee8fac at term 3"}
	{"level":"info","ts":"2023-10-09T23:20:24.769071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became candidate at term 4"}
	{"level":"info","ts":"2023-10-09T23:20:24.769102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac received MsgVoteResp from 20d5e93d92ee8fac at term 4"}
	{"level":"info","ts":"2023-10-09T23:20:24.769145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became leader at term 4"}
	{"level":"info","ts":"2023-10-09T23:20:24.769155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 20d5e93d92ee8fac elected leader 20d5e93d92ee8fac at term 4"}
	{"level":"info","ts":"2023-10-09T23:20:24.772214Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-09T23:20:24.772156Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"20d5e93d92ee8fac","local-member-attributes":"{Name:multinode-921619 ClientURLs:[https://192.168.39.167:2379]}","request-path":"/0/members/20d5e93d92ee8fac/attributes","cluster-id":"31f708155da0e645","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-09T23:20:24.77263Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-09T23:20:24.773493Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-09T23:20:24.773759Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-09T23:20:24.773883Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-09T23:20:24.774936Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.167:2379"}
	
	* 
	* ==> etcd [aa6841202730] <==
	* {"level":"info","ts":"2023-10-09T23:17:13.893491Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.167:2380"}
	{"level":"info","ts":"2023-10-09T23:17:15.044245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-09T23:17:15.044281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-09T23:17:15.04431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac received MsgPreVoteResp from 20d5e93d92ee8fac at term 2"}
	{"level":"info","ts":"2023-10-09T23:17:15.044323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became candidate at term 3"}
	{"level":"info","ts":"2023-10-09T23:17:15.044337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac received MsgVoteResp from 20d5e93d92ee8fac at term 3"}
	{"level":"info","ts":"2023-10-09T23:17:15.044345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became leader at term 3"}
	{"level":"info","ts":"2023-10-09T23:17:15.044358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 20d5e93d92ee8fac elected leader 20d5e93d92ee8fac at term 3"}
	{"level":"info","ts":"2023-10-09T23:17:15.045669Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"20d5e93d92ee8fac","local-member-attributes":"{Name:multinode-921619 ClientURLs:[https://192.168.39.167:2379]}","request-path":"/0/members/20d5e93d92ee8fac/attributes","cluster-id":"31f708155da0e645","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-09T23:17:15.045983Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-09T23:17:15.046063Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-09T23:17:15.047483Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-09T23:17:15.04793Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-09T23:17:15.047969Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-09T23:17:15.047589Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.167:2379"}
	{"level":"info","ts":"2023-10-09T23:19:17.434781Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-09T23:19:17.435048Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-921619","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.167:2380"],"advertise-client-urls":["https://192.168.39.167:2379"]}
	{"level":"warn","ts":"2023-10-09T23:19:17.435185Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-09T23:19:17.43528Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-09T23:19:17.473021Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.167:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-09T23:19:17.473058Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.167:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-09T23:19:17.473143Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"20d5e93d92ee8fac","current-leader-member-id":"20d5e93d92ee8fac"}
	{"level":"info","ts":"2023-10-09T23:19:17.482396Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.167:2380"}
	{"level":"info","ts":"2023-10-09T23:19:17.482508Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.167:2380"}
	{"level":"info","ts":"2023-10-09T23:19:17.482545Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-921619","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.167:2380"],"advertise-client-urls":["https://192.168.39.167:2379"]}
	
	* 
	* ==> kernel <==
	*  23:21:11 up 1 min,  0 users,  load average: 0.46, 0.17, 0.06
	Linux multinode-921619 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [abb0b4d9b2df] <==
	* I1009 23:20:31.469545       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1009 23:20:31.469770       1 main.go:107] hostIP = 192.168.39.167
	podIP = 192.168.39.167
	I1009 23:20:31.470183       1 main.go:116] setting mtu 1500 for CNI 
	I1009 23:20:31.470221       1 main.go:146] kindnetd IP family: "ipv4"
	I1009 23:20:31.470244       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1009 23:20:32.066056       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I1009 23:20:32.066080       1 main.go:227] handling current node
	I1009 23:20:32.066211       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I1009 23:20:32.066219       1 main.go:250] Node multinode-921619-m02 has CIDR [10.244.1.0/24] 
	I1009 23:20:32.066355       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.121 Flags: [] Table: 0} 
	I1009 23:20:42.081254       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I1009 23:20:42.081286       1 main.go:227] handling current node
	I1009 23:20:42.081303       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I1009 23:20:42.081308       1 main.go:250] Node multinode-921619-m02 has CIDR [10.244.1.0/24] 
	I1009 23:20:52.096367       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I1009 23:20:52.097025       1 main.go:227] handling current node
	I1009 23:20:52.097077       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I1009 23:20:52.097089       1 main.go:250] Node multinode-921619-m02 has CIDR [10.244.1.0/24] 
	I1009 23:21:02.116579       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I1009 23:21:02.116644       1 main.go:227] handling current node
	I1009 23:21:02.116662       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I1009 23:21:02.116669       1 main.go:250] Node multinode-921619-m02 has CIDR [10.244.1.0/24] 
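	kindnet's "Adding route" entries correspond to ordinary kernel routes toward each remote node's pod CIDR; a sketch for verifying one on this node (CIDR and gateway taken from the log, interface name hypothetical):
	    ip route show 10.244.1.0/24
	    # expected shape: "10.244.1.0/24 via 192.168.39.121 dev eth0"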
	
	* 
	* ==> kindnet [af05e798f2ed] <==
	* I1009 23:18:42.604937       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I1009 23:18:42.605025       1 main.go:227] handling current node
	I1009 23:18:42.605044       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I1009 23:18:42.605050       1 main.go:250] Node multinode-921619-m02 has CIDR [10.244.1.0/24] 
	I1009 23:18:42.605383       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I1009 23:18:42.605479       1 main.go:250] Node multinode-921619-m03 has CIDR [10.244.3.0/24] 
	I1009 23:18:52.610659       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I1009 23:18:52.610711       1 main.go:227] handling current node
	I1009 23:18:52.610731       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I1009 23:18:52.610738       1 main.go:250] Node multinode-921619-m02 has CIDR [10.244.1.0/24] 
	I1009 23:18:52.611186       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I1009 23:18:52.611220       1 main.go:250] Node multinode-921619-m03 has CIDR [10.244.2.0/24] 
	I1009 23:18:52.611321       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.80 Flags: [] Table: 0} 
	I1009 23:19:02.625398       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I1009 23:19:02.625450       1 main.go:227] handling current node
	I1009 23:19:02.625467       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I1009 23:19:02.625473       1 main.go:250] Node multinode-921619-m02 has CIDR [10.244.1.0/24] 
	I1009 23:19:02.625966       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I1009 23:19:02.626073       1 main.go:250] Node multinode-921619-m03 has CIDR [10.244.2.0/24] 
	I1009 23:19:12.639713       1 main.go:223] Handling node with IPs: map[192.168.39.167:{}]
	I1009 23:19:12.639779       1 main.go:227] handling current node
	I1009 23:19:12.639795       1 main.go:223] Handling node with IPs: map[192.168.39.121:{}]
	I1009 23:19:12.639805       1 main.go:250] Node multinode-921619-m02 has CIDR [10.244.1.0/24] 
	I1009 23:19:12.640402       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I1009 23:19:12.640843       1 main.go:250] Node multinode-921619-m03 has CIDR [10.244.2.0/24] 
	
	* 
	* ==> kube-apiserver [7e45cb6e61a0] <==
	* I1009 23:20:26.170251       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1009 23:20:26.170500       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1009 23:20:26.169682       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1009 23:20:26.306379       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 23:20:26.332548       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1009 23:20:26.366576       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1009 23:20:26.366656       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1009 23:20:26.367217       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1009 23:20:26.367316       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1009 23:20:26.369723       1 shared_informer.go:318] Caches are synced for configmaps
	I1009 23:20:26.370279       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1009 23:20:26.370334       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 23:20:26.374712       1 aggregator.go:166] initial CRD sync complete...
	I1009 23:20:26.374723       1 autoregister_controller.go:141] Starting autoregister controller
	I1009 23:20:26.374727       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 23:20:26.374733       1 cache.go:39] Caches are synced for autoregister controller
	I1009 23:20:27.167566       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1009 23:20:27.589917       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.167]
	I1009 23:20:27.591354       1 controller.go:624] quota admission added evaluator for: endpoints
	I1009 23:20:27.599135       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 23:20:28.959281       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1009 23:20:29.099774       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1009 23:20:29.128712       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1009 23:20:29.240002       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 23:20:29.251661       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [ac1bbc7d4311] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 23:19:27.350106       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 23:19:27.361388       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 23:19:27.452192       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [c4a026affbb0] <==
	* I1009 23:20:38.771427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.657908ms"
	I1009 23:20:38.772003       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="58.03µs"
	I1009 23:20:38.776200       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.174707ms"
	I1009 23:20:38.776498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="194.998µs"
	I1009 23:20:38.799908       1 shared_informer.go:318] Caches are synced for deployment
	I1009 23:20:38.846013       1 shared_informer.go:318] Caches are synced for daemon sets
	I1009 23:20:38.912751       1 shared_informer.go:318] Caches are synced for resource quota
	I1009 23:20:38.921287       1 shared_informer.go:318] Caches are synced for taint
	I1009 23:20:38.921628       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1009 23:20:38.921757       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-921619"
	I1009 23:20:38.922052       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1009 23:20:38.922487       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-921619-m02"
	I1009 23:20:38.922388       1 taint_manager.go:211] "Sending events to api server"
	I1009 23:20:38.923493       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1009 23:20:38.923785       1 event.go:307] "Event occurred" object="multinode-921619" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-921619 event: Registered Node multinode-921619 in Controller"
	I1009 23:20:38.923951       1 event.go:307] "Event occurred" object="multinode-921619-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-921619-m02 event: Registered Node multinode-921619-m02 in Controller"
	I1009 23:20:38.943117       1 shared_informer.go:318] Caches are synced for resource quota
	I1009 23:20:39.275634       1 shared_informer.go:318] Caches are synced for garbage collector
	I1009 23:20:39.275684       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1009 23:20:39.279720       1 shared_informer.go:318] Caches are synced for garbage collector
	I1009 23:20:44.468474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.965µs"
	I1009 23:20:44.524273       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.897569ms"
	I1009 23:20:44.524378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.358µs"
	I1009 23:20:44.564351       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.579076ms"
	I1009 23:20:44.564621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.046µs"
	
	* 
	* ==> kube-controller-manager [cb0e5b797b8d] <==
	* I1009 23:18:09.133235       1 event.go:307] "Event occurred" object="kube-system/kindnet-w7ch7" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1009 23:18:09.143164       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-6nfdb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1009 23:18:16.699639       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-921619-m02"
	I1009 23:18:18.462367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="91.183µs"
	I1009 23:18:19.157829       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-6xrrs" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-6xrrs"
	I1009 23:18:19.410649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="72.035µs"
	I1009 23:18:19.417420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="60.812µs"
	I1009 23:18:43.970346       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-k4jdx"
	I1009 23:18:43.981284       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="23.585919ms"
	I1009 23:18:43.993501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="12.148705ms"
	I1009 23:18:44.012038       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="18.461655ms"
	I1009 23:18:44.012136       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.962µs"
	I1009 23:18:45.832098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.840532ms"
	I1009 23:18:45.833163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="89.423µs"
	I1009 23:18:46.978446       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-921619-m02"
	I1009 23:18:47.847719       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-921619-m03\" does not exist"
	I1009 23:18:47.848066       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-921619-m02"
	I1009 23:18:47.850259       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-m9w29" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-m9w29"
	I1009 23:18:47.866154       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-921619-m03" podCIDRs=["10.244.2.0/24"]
	I1009 23:18:48.690197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="48.39µs"
	I1009 23:18:48.851979       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="105.63µs"
	I1009 23:18:48.865918       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="64.671µs"
	I1009 23:18:48.871303       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.885µs"
	I1009 23:19:13.157563       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-921619-m02"
	I1009 23:19:15.586788       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-921619-m02"
	
	* 
	* ==> kube-proxy [0c38a812e625] <==
	* I1009 23:20:27.712728       1 server_others.go:69] "Using iptables proxy"
	I1009 23:20:27.731248       1 node.go:141] Successfully retrieved node IP: 192.168.39.167
	I1009 23:20:27.827989       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1009 23:20:27.828008       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 23:20:27.832972       1 server_others.go:152] "Using iptables Proxier"
	I1009 23:20:27.833014       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1009 23:20:27.833188       1 server.go:846] "Version info" version="v1.28.2"
	I1009 23:20:27.833200       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 23:20:27.835618       1 config.go:188] "Starting service config controller"
	I1009 23:20:27.836420       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1009 23:20:27.836447       1 config.go:315] "Starting node config controller"
	I1009 23:20:27.836452       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1009 23:20:27.838175       1 config.go:97] "Starting endpoint slice config controller"
	I1009 23:20:27.838185       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1009 23:20:27.936553       1 shared_informer.go:318] Caches are synced for node config
	I1009 23:20:27.936629       1 shared_informer.go:318] Caches are synced for service config
	I1009 23:20:27.938997       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [fbb07f20fa16] <==
	* I1009 23:17:18.189751       1 server_others.go:69] "Using iptables proxy"
	I1009 23:17:18.353976       1 node.go:141] Successfully retrieved node IP: 192.168.39.167
	I1009 23:17:18.631584       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1009 23:17:18.631832       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 23:17:18.650235       1 server_others.go:152] "Using iptables Proxier"
	I1009 23:17:18.654406       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1009 23:17:18.655544       1 server.go:846] "Version info" version="v1.28.2"
	I1009 23:17:18.655688       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 23:17:18.659621       1 config.go:188] "Starting service config controller"
	I1009 23:17:18.660455       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1009 23:17:18.663168       1 config.go:97] "Starting endpoint slice config controller"
	I1009 23:17:18.663273       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1009 23:17:18.664808       1 config.go:315] "Starting node config controller"
	I1009 23:17:18.665011       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1009 23:17:18.800979       1 shared_informer.go:318] Caches are synced for service config
	I1009 23:17:18.832828       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1009 23:17:18.871120       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2c47ae8aed1a] <==
	* I1009 23:17:14.216183       1 serving.go:348] Generated self-signed cert in-memory
	W1009 23:17:16.439061       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 23:17:16.441958       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
	W1009 23:17:16.442073       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 23:17:16.442190       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 23:17:16.470254       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1009 23:17:16.470306       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 23:17:16.476853       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1009 23:17:16.487134       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 23:17:16.487214       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 23:17:16.487231       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1009 23:17:16.588402       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 23:19:17.370214       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1009 23:19:17.370298       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1009 23:19:17.370602       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [f48e2b7e3977] <==
	* I1009 23:20:23.529604       1 serving.go:348] Generated self-signed cert in-memory
	W1009 23:20:26.266204       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 23:20:26.266420       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 23:20:26.266584       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 23:20:26.266649       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 23:20:26.310308       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1009 23:20:26.310639       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 23:20:26.315722       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 23:20:26.316196       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 23:20:26.319709       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1009 23:20:26.320058       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1009 23:20:26.417586       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-09 23:19:54 UTC, ends at Mon 2023-10-09 23:21:11 UTC. --
	Oct 09 23:20:27 multinode-921619 kubelet[1309]: E1009 23:20:27.154466    1309 projected.go:198] Error preparing data for projected volume kube-api-access-nqqpc for pod default/busybox-5bc68d56bd-pbmjv: object "default"/"kube-root-ca.crt" not registered
	Oct 09 23:20:27 multinode-921619 kubelet[1309]: E1009 23:20:27.154510    1309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a0d2acd2-5f22-466f-a3cf-b5a896f1eaba-kube-api-access-nqqpc podName:a0d2acd2-5f22-466f-a3cf-b5a896f1eaba nodeName:}" failed. No retries permitted until 2023-10-09 23:20:28.154497768 +0000 UTC m=+8.045060882 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nqqpc" (UniqueName: "kubernetes.io/projected/a0d2acd2-5f22-466f-a3cf-b5a896f1eaba-kube-api-access-nqqpc") pod "busybox-5bc68d56bd-pbmjv" (UID: "a0d2acd2-5f22-466f-a3cf-b5a896f1eaba") : object "default"/"kube-root-ca.crt" not registered
	Oct 09 23:20:28 multinode-921619 kubelet[1309]: E1009 23:20:28.062302    1309 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 09 23:20:28 multinode-921619 kubelet[1309]: E1009 23:20:28.062416    1309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2898e186-93b2-49f3-9e87-2f6c4f5619ef-config-volume podName:2898e186-93b2-49f3-9e87-2f6c4f5619ef nodeName:}" failed. No retries permitted until 2023-10-09 23:20:30.062401963 +0000 UTC m=+9.952965062 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2898e186-93b2-49f3-9e87-2f6c4f5619ef-config-volume") pod "coredns-5dd5756b68-m56ds" (UID: "2898e186-93b2-49f3-9e87-2f6c4f5619ef") : object "kube-system"/"coredns" not registered
	Oct 09 23:20:28 multinode-921619 kubelet[1309]: E1009 23:20:28.163652    1309 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Oct 09 23:20:28 multinode-921619 kubelet[1309]: E1009 23:20:28.163703    1309 projected.go:198] Error preparing data for projected volume kube-api-access-nqqpc for pod default/busybox-5bc68d56bd-pbmjv: object "default"/"kube-root-ca.crt" not registered
	Oct 09 23:20:28 multinode-921619 kubelet[1309]: E1009 23:20:28.163748    1309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a0d2acd2-5f22-466f-a3cf-b5a896f1eaba-kube-api-access-nqqpc podName:a0d2acd2-5f22-466f-a3cf-b5a896f1eaba nodeName:}" failed. No retries permitted until 2023-10-09 23:20:30.163735327 +0000 UTC m=+10.054298424 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nqqpc" (UniqueName: "kubernetes.io/projected/a0d2acd2-5f22-466f-a3cf-b5a896f1eaba-kube-api-access-nqqpc") pod "busybox-5bc68d56bd-pbmjv" (UID: "a0d2acd2-5f22-466f-a3cf-b5a896f1eaba") : object "default"/"kube-root-ca.crt" not registered
	Oct 09 23:20:30 multinode-921619 kubelet[1309]: E1009 23:20:30.082583    1309 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 09 23:20:30 multinode-921619 kubelet[1309]: E1009 23:20:30.082694    1309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2898e186-93b2-49f3-9e87-2f6c4f5619ef-config-volume podName:2898e186-93b2-49f3-9e87-2f6c4f5619ef nodeName:}" failed. No retries permitted until 2023-10-09 23:20:34.082676345 +0000 UTC m=+13.973239443 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2898e186-93b2-49f3-9e87-2f6c4f5619ef-config-volume") pod "coredns-5dd5756b68-m56ds" (UID: "2898e186-93b2-49f3-9e87-2f6c4f5619ef") : object "kube-system"/"coredns" not registered
	Oct 09 23:20:30 multinode-921619 kubelet[1309]: E1009 23:20:30.183440    1309 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Oct 09 23:20:30 multinode-921619 kubelet[1309]: E1009 23:20:30.183465    1309 projected.go:198] Error preparing data for projected volume kube-api-access-nqqpc for pod default/busybox-5bc68d56bd-pbmjv: object "default"/"kube-root-ca.crt" not registered
	Oct 09 23:20:30 multinode-921619 kubelet[1309]: E1009 23:20:30.183542    1309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a0d2acd2-5f22-466f-a3cf-b5a896f1eaba-kube-api-access-nqqpc podName:a0d2acd2-5f22-466f-a3cf-b5a896f1eaba nodeName:}" failed. No retries permitted until 2023-10-09 23:20:34.183527579 +0000 UTC m=+14.074090693 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nqqpc" (UniqueName: "kubernetes.io/projected/a0d2acd2-5f22-466f-a3cf-b5a896f1eaba-kube-api-access-nqqpc") pod "busybox-5bc68d56bd-pbmjv" (UID: "a0d2acd2-5f22-466f-a3cf-b5a896f1eaba") : object "default"/"kube-root-ca.crt" not registered
	Oct 09 23:20:30 multinode-921619 kubelet[1309]: I1009 23:20:30.810373    1309 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f110ad49a0fdcafca9876b02947033ab8257e5ee0e1cc83d588376dab9d9da9"
	Oct 09 23:20:30 multinode-921619 kubelet[1309]: I1009 23:20:30.842189    1309 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59dab2a3d8cf39e13da78db3d073ed6f67059fe743fc4d9a71da80c10f26a31a"
	Oct 09 23:20:30 multinode-921619 kubelet[1309]: E1009 23:20:30.871259    1309 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-m56ds" podUID="2898e186-93b2-49f3-9e87-2f6c4f5619ef"
	Oct 09 23:20:30 multinode-921619 kubelet[1309]: E1009 23:20:30.873204    1309 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-pbmjv" podUID="a0d2acd2-5f22-466f-a3cf-b5a896f1eaba"
	Oct 09 23:20:32 multinode-921619 kubelet[1309]: E1009 23:20:32.456045    1309 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-m56ds" podUID="2898e186-93b2-49f3-9e87-2f6c4f5619ef"
	Oct 09 23:20:32 multinode-921619 kubelet[1309]: E1009 23:20:32.456763    1309 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-5bc68d56bd-pbmjv" podUID="a0d2acd2-5f22-466f-a3cf-b5a896f1eaba"
	Oct 09 23:20:33 multinode-921619 kubelet[1309]: I1009 23:20:33.135338    1309 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 09 23:20:34 multinode-921619 kubelet[1309]: E1009 23:20:34.114649    1309 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 09 23:20:34 multinode-921619 kubelet[1309]: E1009 23:20:34.115273    1309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2898e186-93b2-49f3-9e87-2f6c4f5619ef-config-volume podName:2898e186-93b2-49f3-9e87-2f6c4f5619ef nodeName:}" failed. No retries permitted until 2023-10-09 23:20:42.115254962 +0000 UTC m=+22.005818070 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2898e186-93b2-49f3-9e87-2f6c4f5619ef-config-volume") pod "coredns-5dd5756b68-m56ds" (UID: "2898e186-93b2-49f3-9e87-2f6c4f5619ef") : object "kube-system"/"coredns" not registered
	Oct 09 23:20:34 multinode-921619 kubelet[1309]: E1009 23:20:34.215635    1309 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Oct 09 23:20:34 multinode-921619 kubelet[1309]: E1009 23:20:34.215696    1309 projected.go:198] Error preparing data for projected volume kube-api-access-nqqpc for pod default/busybox-5bc68d56bd-pbmjv: object "default"/"kube-root-ca.crt" not registered
	Oct 09 23:20:34 multinode-921619 kubelet[1309]: E1009 23:20:34.215772    1309 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a0d2acd2-5f22-466f-a3cf-b5a896f1eaba-kube-api-access-nqqpc podName:a0d2acd2-5f22-466f-a3cf-b5a896f1eaba nodeName:}" failed. No retries permitted until 2023-10-09 23:20:42.215756254 +0000 UTC m=+22.106319367 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-nqqpc" (UniqueName: "kubernetes.io/projected/a0d2acd2-5f22-466f-a3cf-b5a896f1eaba-kube-api-access-nqqpc") pod "busybox-5bc68d56bd-pbmjv" (UID: "a0d2acd2-5f22-466f-a3cf-b5a896f1eaba") : object "default"/"kube-root-ca.crt" not registered
	Oct 09 23:20:43 multinode-921619 kubelet[1309]: I1009 23:20:43.423553    1309 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5111f4fb441348620927d143bfe29f1a414c6350dbb44278d1b01b2c3f47ea3"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-921619 -n multinode-921619
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-921619 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (90.09s)
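The kubelet log above shows the usual post-restart sequence: the kube-root-ca.crt and coredns configmap mounts back off with doubling delays (1s, 2s, 4s, 8s) until the CNI config lands and the node flips to ready at 23:20:33. A minimal sketch for re-checking that state by hand, assuming the multinode-921619 profile and its kubeconfig context are still present on the CI host:

	# Confirm both nodes rejoined the cluster and report Ready
	kubectl --context multinode-921619 get nodes -o wide
	# Surface any pods stuck outside Running (the same field selector the harness uses above)
	kubectl --context multinode-921619 get po -A --field-selector=status.phase!=Running
	# Cross-check minikube's own view of the profile
	out/minikube-linux-amd64 status -p multinode-921619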

TestRunningBinaryUpgrade (15.17s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.871904787.exe start -p running-upgrade-425581 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.871904787.exe start -p running-upgrade-425581 --memory=2200 --vm-driver=kvm2 : exit status 70 (12.09005346s)

-- stdout --
	! [running-upgrade-425581] minikube v1.6.2 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig3757065216
	* Selecting 'kvm2' driver from user configuration (alternates: [none])
	* Downloading driver docker-machine-driver-kvm2:
	* Downloading VM boot image ...
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	* minikube 1.31.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.31.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	    > docker-machine-driver-kvm2.sha256: 65 B / 65 B  100.00%
	    > docker-machine-driver-kvm2: 13.86 MiB / 13.86 MiB  100.00% 2.20 MiB p/s
	    > minikube-v1.6.0.iso.sha256: 65 B / 65 B  100.00%
	    > minikube-v1.6.0.iso: 150.93 MiB / 150.93 MiB  100.00% 113.89 MiB p/s 2s
	X Unable to start VM. Please investigate and run 'minikube delete' if possible: create: Error creating machine: Error in driver during machine creation: creating network: creating network minikube-net: virError(Code=1, Domain=0, Message='internal error: Network is already in use by interface virbr1')
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
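The fatal error in this first attempt is libvirt refusing to define minikube-net because its address range is already claimed by the virbr1 bridge. A sketch for locating and clearing the conflict with stock virsh commands; the minikube-net name passed to net-destroy/net-undefine is illustrative, use whatever net-list actually reports:

	# List every libvirt network and whether it is active
	virsh net-list --all
	# Dump each network's XML to find the one that owns virbr1 and its IP range
	for n in $(virsh net-list --all --name); do virsh net-dumpxml "$n"; done
	# Tear down the stale network squatting on the range (name here is an example)
	virsh net-destroy minikube-net
	virsh net-undefine minikube-net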
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.871904787.exe start -p running-upgrade-425581 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.871904787.exe start -p running-upgrade-425581 --memory=2200 --vm-driver=kvm2 : exit status 78 (124.517237ms)

-- stdout --
	* [running-upgrade-425581] minikube v1.6.2 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig203016892
	* Selecting 'kvm2' driver from user configuration (alternates: [none])

-- /stdout --
** stderr ** 
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	* 
	X Unable to start VM. Please investigate and run 'minikube delete' if possible
	* Error: [KVM2_NO_DOMAIN] Error getting state for host: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'running-upgrade-425581'')
	* Suggestion: The VM that minikube is configured for no longer exists. Run 'minikube delete'
	* Related issues:
	  - https://github.com/kubernetes/minikube/issues/3636
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.871904787.exe start -p running-upgrade-425581 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.871904787.exe start -p running-upgrade-425581 --memory=2200 --vm-driver=kvm2 : exit status 78 (113.941038ms)

-- stdout --
	* [running-upgrade-425581] minikube v1.6.2 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig1940189611
	* Selecting 'kvm2' driver from user configuration (alternates: [none])

-- /stdout --
** stderr ** 
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	* 
	X Unable to start VM. Please investigate and run 'minikube delete' if possible
	* Error: [KVM2_NO_DOMAIN] Error getting state for host: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'running-upgrade-425581'')
	* Suggestion: The VM that minikube is configured for no longer exists. Run 'minikube delete'
	* Related issues:
	  - https://github.com/kubernetes/minikube/issues/3636
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
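Both retries fail before any VM is created: the driver preflight reports that /usr/bin/qemu-system-x86_64 has no KVM support, and the subsequent state lookup returns KVM2_NO_DOMAIN because the domain never existed. A sketch for validating host virtualization and clearing the stale profile, assuming it runs on the CI host itself (the agent itself runs as a KVM guest, so nested virtualization must be enabled for these checks to pass):

	# The CPU must expose vmx/svm and /dev/kvm must be accessible to the jenkins user
	egrep -c '(vmx|svm)' /proc/cpuinfo
	ls -l /dev/kvm
	virt-host-validate qemu
	# Remove the profile whose domain no longer exists, as the error output suggests
	out/minikube-linux-amd64 delete -p running-upgrade-425581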
version_upgrade_test.go:139: legacy v1.6.2 start failed: exit status 78
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-09 23:29:38.957240632 +0000 UTC m=+2111.801294959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-425581 -n running-upgrade-425581
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-425581 -n running-upgrade-425581: exit status 85 (76.794669ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node running-upgrade-425581
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_status_8980859c28362053cbc8940f41f258f108f0854e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-425581" host is not running, skipping log retrieval (state="")
helpers_test.go:175: Cleaning up "running-upgrade-425581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-425581
--- FAIL: TestRunningBinaryUpgrade (15.17s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-757458 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p old-k8s-version-757458 "sudo crictl images -o json": exit status 1 (268.702673ms)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p old-k8s-version-757458 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
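The '\x1b' the JSON decoder trips over is the ANSI color escape in crictl's FATA line; the underlying failure is a protocol mismatch, since the dockershim shipped with Kubernetes v1.16 serves only the CRI v1alpha2 API, so a crictl built against runtime.v1 cannot list images through /var/run/dockershim.sock. A sketch of a fallback query through the docker CLI instead of CRI, reusing the profile name from this test; the --format string is a standard docker Go template:

	out/minikube-linux-amd64 ssh -p old-k8s-version-757458 \
	  "sudo docker images --format '{{.Repository}}:{{.Tag}}'"

The expected output is the eight k8s.gcr.io images listed in the want/got diff above.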
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-757458 -n old-k8s-version-757458
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-757458 logs -n 25
E1009 23:48:00.833847   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-757458 logs -n 25: (1.201401848s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-516009 sudo cat                              | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:46 UTC | 09 Oct 23 23:46 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p auto-516009 sudo docker                           | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:46 UTC | 09 Oct 23 23:46 UTC |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-516009 sudo systemctl                        | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:46 UTC | 09 Oct 23 23:46 UTC |
	|         | status cri-docker --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-516009 sudo systemctl                        | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:46 UTC | 09 Oct 23 23:46 UTC |
	|         | cat cri-docker --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-516009 sudo cat                              | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p auto-516009 sudo cat                              | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p auto-516009 sudo                                  | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p auto-516009 sudo systemctl                        | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC |                     |
	|         | status containerd --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-516009 sudo systemctl                        | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	|         | cat containerd --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-516009 sudo cat                              | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p auto-516009 sudo cat                              | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p auto-516009 sudo containerd                       | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	|         | config dump                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-516009 sudo systemctl                        | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC |                     |
	|         | status crio --all --full                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-516009 sudo systemctl                        | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	|         | cat crio --no-pager                                  |                           |         |         |                     |                     |
	| ssh     | -p auto-516009 sudo find                             | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-516009 sudo crio                             | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p auto-516009                                       | auto-516009               | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	| start   | -p flannel-516009                                    | flannel-516009            | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=flannel --driver=kvm2                          |                           |         |         |                     |                     |
	| ssh     | -p newest-cni-077416 sudo                            | newest-cni-077416         | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	|         | crictl images -o json                                |                           |         |         |                     |                     |
	| pause   | -p newest-cni-077416                                 | newest-cni-077416         | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	| unpause | -p newest-cni-077416                                 | newest-cni-077416         | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	| delete  | -p newest-cni-077416                                 | newest-cni-077416         | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	| delete  | -p newest-cni-077416                                 | newest-cni-077416         | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC | 09 Oct 23 23:47 UTC |
	| start   | -p enable-default-cni-516009                         | enable-default-cni-516009 | jenkins | v1.31.2 | 09 Oct 23 23:47 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	| ssh     | -p old-k8s-version-757458 sudo                       | old-k8s-version-757458    | jenkins | v1.31.2 | 09 Oct 23 23:48 UTC |                     |
	|         | crictl images -o json                                |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/09 23:47:21
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 23:47:21.085515  121077 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:47:21.085659  121077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:47:21.085670  121077 out.go:309] Setting ErrFile to fd 2...
	I1009 23:47:21.085675  121077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:47:21.085922  121077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-78415/.minikube/bin
	I1009 23:47:21.086543  121077 out.go:303] Setting JSON to false
	I1009 23:47:21.087578  121077 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":12588,"bootTime":1696882653,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 23:47:21.087639  121077 start.go:138] virtualization: kvm guest
	I1009 23:47:21.090030  121077 out.go:177] * [enable-default-cni-516009] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1009 23:47:21.091499  121077 out.go:177]   - MINIKUBE_LOCATION=17375
	I1009 23:47:21.092838  121077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 23:47:21.091523  121077 notify.go:220] Checking for updates...
	I1009 23:47:21.095535  121077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17375-78415/kubeconfig
	I1009 23:47:21.096905  121077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	I1009 23:47:21.098160  121077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 23:47:21.099501  121077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 23:47:21.101370  121077 config.go:182] Loaded profile config "default-k8s-diff-port-468042": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:47:21.101487  121077 config.go:182] Loaded profile config "flannel-516009": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:47:21.101577  121077 config.go:182] Loaded profile config "old-k8s-version-757458": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1009 23:47:21.101647  121077 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 23:47:21.138108  121077 out.go:177] * Using the kvm2 driver based on user configuration
	I1009 23:47:21.139545  121077 start.go:298] selected driver: kvm2
	I1009 23:47:21.139557  121077 start.go:902] validating driver "kvm2" against <nil>
	I1009 23:47:21.139566  121077 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 23:47:21.140310  121077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:47:21.140378  121077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17375-78415/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 23:47:21.155178  121077 install.go:137] /home/jenkins/minikube-integration/17375-78415/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1009 23:47:21.155235  121077 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	E1009 23:47:21.155399  121077 start_flags.go:457] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1009 23:47:21.155421  121077 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 23:47:21.155476  121077 cni.go:84] Creating CNI manager for "bridge"
	I1009 23:47:21.155488  121077 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 23:47:21.155496  121077 start_flags.go:323] config:
	{Name:enable-default-cni-516009 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-516009 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:47:21.155618  121077 iso.go:125] acquiring lock: {Name:mk8f0545fb1f7801479f5eb65fbe7d8f303a99cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:47:21.158301  121077 out.go:177] * Starting control plane node enable-default-cni-516009 in cluster enable-default-cni-516009
	I1009 23:47:18.988476  116683 pod_ready.go:102] pod "metrics-server-57f55c9bc5-f9vxx" in "kube-system" namespace has status "Ready":"False"
	I1009 23:47:21.487508  116683 pod_ready.go:102] pod "metrics-server-57f55c9bc5-f9vxx" in "kube-system" namespace has status "Ready":"False"
	I1009 23:47:19.774879  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:19.775370  120457 main.go:141] libmachine: (flannel-516009) DBG | unable to find current IP address of domain flannel-516009 in network mk-flannel-516009
	I1009 23:47:19.775387  120457 main.go:141] libmachine: (flannel-516009) DBG | I1009 23:47:19.775310  120479 retry.go:31] will retry after 3.186135073s: waiting for machine to come up
	I1009 23:47:22.963541  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:22.964143  120457 main.go:141] libmachine: (flannel-516009) DBG | unable to find current IP address of domain flannel-516009 in network mk-flannel-516009
	I1009 23:47:22.964170  120457 main.go:141] libmachine: (flannel-516009) DBG | I1009 23:47:22.964103  120479 retry.go:31] will retry after 3.670971956s: waiting for machine to come up
	I1009 23:47:20.195463  115309 system_pods.go:86] 4 kube-system pods found
	I1009 23:47:20.195487  115309 system_pods.go:89] "coredns-5644d7b6d9-w2qqz" [22ba58b1-12d6-49e9-a3b8-9394f4f1b97d] Running
	I1009 23:47:20.195492  115309 system_pods.go:89] "kube-proxy-8ngv2" [186fef3d-bb2d-4ce3-bce1-a59e12fc7df3] Running
	I1009 23:47:20.195499  115309 system_pods.go:89] "metrics-server-74d5856cc6-zls5b" [f7adcc12-6ddd-42f7-8b3c-ecafb27627e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 23:47:20.195504  115309 system_pods.go:89] "storage-provisioner" [9eff148f-8409-45b8-912a-fc1a9a1f00d7] Running
	I1009 23:47:20.195520  115309 retry.go:31] will retry after 12.771368347s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1009 23:47:21.159728  121077 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1009 23:47:21.159755  121077 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17375-78415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1009 23:47:21.159774  121077 cache.go:57] Caching tarball of preloaded images
	I1009 23:47:21.159852  121077 preload.go:174] Found /home/jenkins/minikube-integration/17375-78415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1009 23:47:21.159868  121077 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1009 23:47:21.159968  121077 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/enable-default-cni-516009/config.json ...
	I1009 23:47:21.159995  121077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/enable-default-cni-516009/config.json: {Name:mke913c571c9ef00231c037612450b429dee212b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:47:21.160150  121077 start.go:365] acquiring machines lock for enable-default-cni-516009: {Name:mk4d06451f08f4d0dfbc191a7a07492b6e7c9c1f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 23:47:23.986644  116683 pod_ready.go:102] pod "metrics-server-57f55c9bc5-f9vxx" in "kube-system" namespace has status "Ready":"False"
	I1009 23:47:25.986849  116683 pod_ready.go:102] pod "metrics-server-57f55c9bc5-f9vxx" in "kube-system" namespace has status "Ready":"False"
	I1009 23:47:26.636591  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:26.637149  120457 main.go:141] libmachine: (flannel-516009) DBG | unable to find current IP address of domain flannel-516009 in network mk-flannel-516009
	I1009 23:47:26.637173  120457 main.go:141] libmachine: (flannel-516009) DBG | I1009 23:47:26.637100  120479 retry.go:31] will retry after 4.622746533s: waiting for machine to come up
	I1009 23:47:28.487096  116683 pod_ready.go:102] pod "metrics-server-57f55c9bc5-f9vxx" in "kube-system" namespace has status "Ready":"False"
	I1009 23:47:30.488759  116683 pod_ready.go:102] pod "metrics-server-57f55c9bc5-f9vxx" in "kube-system" namespace has status "Ready":"False"
	I1009 23:47:31.262525  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:31.262988  120457 main.go:141] libmachine: (flannel-516009) Found IP for machine: 192.168.50.84
	I1009 23:47:31.263020  120457 main.go:141] libmachine: (flannel-516009) Reserving static IP address...
	I1009 23:47:31.263036  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has current primary IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:31.263294  120457 main.go:141] libmachine: (flannel-516009) DBG | unable to find host DHCP lease matching {name: "flannel-516009", mac: "52:54:00:61:6f:27", ip: "192.168.50.84"} in network mk-flannel-516009
	I1009 23:47:31.338072  120457 main.go:141] libmachine: (flannel-516009) DBG | Getting to WaitForSSH function...
	I1009 23:47:31.338105  120457 main.go:141] libmachine: (flannel-516009) Reserved static IP address: 192.168.50.84
	I1009 23:47:31.338119  120457 main.go:141] libmachine: (flannel-516009) Waiting for SSH to be available...
	I1009 23:47:31.340866  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:31.341305  120457 main.go:141] libmachine: (flannel-516009) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009
	I1009 23:47:31.341342  120457 main.go:141] libmachine: (flannel-516009) DBG | unable to find defined IP address of network mk-flannel-516009 interface with MAC address 52:54:00:61:6f:27
	I1009 23:47:31.341445  120457 main.go:141] libmachine: (flannel-516009) DBG | Using SSH client type: external
	I1009 23:47:31.341481  120457 main.go:141] libmachine: (flannel-516009) DBG | Using SSH private key: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/flannel-516009/id_rsa (-rw-------)
	I1009 23:47:31.341525  120457 main.go:141] libmachine: (flannel-516009) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17375-78415/.minikube/machines/flannel-516009/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 23:47:31.341544  120457 main.go:141] libmachine: (flannel-516009) DBG | About to run SSH command:
	I1009 23:47:31.341563  120457 main.go:141] libmachine: (flannel-516009) DBG | exit 0
	I1009 23:47:31.344963  120457 main.go:141] libmachine: (flannel-516009) DBG | SSH cmd err, output: exit status 255: 
	I1009 23:47:31.344983  120457 main.go:141] libmachine: (flannel-516009) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1009 23:47:31.344990  120457 main.go:141] libmachine: (flannel-516009) DBG | command : exit 0
	I1009 23:47:31.344999  120457 main.go:141] libmachine: (flannel-516009) DBG | err     : exit status 255
	I1009 23:47:31.345007  120457 main.go:141] libmachine: (flannel-516009) DBG | output  : 
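
The probe above fails with exit status 255 because libmachine shells out to /usr/bin/ssh before the guest has an address (note the empty docker@ target in the command dump), then simply retries until `exit 0` succeeds. A hedged sketch of that external-client probe; the key path is a placeholder and the option list is trimmed relative to the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probeSSH runs `exit 0` on the guest through the system ssh binary,
// mirroring the external-client path shown in the log.
func probeSSH(host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	for {
		if err := probeSSH("192.168.50.84", "/path/to/id_rsa"); err != nil {
			// ssh exits 255 while sshd is not reachable yet.
			fmt.Println("not up yet:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Println("SSH is available")
		return
	}
}
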
	I1009 23:47:32.973874  115309 system_pods.go:86] 5 kube-system pods found
	I1009 23:47:32.973900  115309 system_pods.go:89] "coredns-5644d7b6d9-w2qqz" [22ba58b1-12d6-49e9-a3b8-9394f4f1b97d] Running
	I1009 23:47:32.973905  115309 system_pods.go:89] "kube-apiserver-old-k8s-version-757458" [c37984e8-b73c-49e9-9364-d2bf776be636] Pending
	I1009 23:47:32.973909  115309 system_pods.go:89] "kube-proxy-8ngv2" [186fef3d-bb2d-4ce3-bce1-a59e12fc7df3] Running
	I1009 23:47:32.973916  115309 system_pods.go:89] "metrics-server-74d5856cc6-zls5b" [f7adcc12-6ddd-42f7-8b3c-ecafb27627e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 23:47:32.973921  115309 system_pods.go:89] "storage-provisioner" [9eff148f-8409-45b8-912a-fc1a9a1f00d7] Running
	I1009 23:47:32.973939  115309 retry.go:31] will retry after 16.470345423s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
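
These system_pods.go lines come from a check that lists kube-system pods and retries until every core component is represented. A sketch of the same poll using client-go, assuming a kubeconfig path; the required-component names are taken from the retry message above, everything else is illustrative:

package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

var required = []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}

// missingComponents lists kube-system pods and reports which required
// control-plane components have no pod whose name starts with them.
func missingComponents(cs *kubernetes.Clientset) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var missing []string
	for _, want := range required {
		found := false
		for _, p := range pods.Items {
			if strings.HasPrefix(p.Name, want) {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, want)
		}
	}
	return missing, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		missing, err := missingComponents(cs)
		if err == nil && len(missing) == 0 {
			fmt.Println("all system pods present")
			return
		}
		fmt.Printf("will retry: missing components: %v (err: %v)\n", missing, err)
		time.Sleep(15 * time.Second)
	}
}
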
	I1009 23:47:36.675527  121077 start.go:369] acquired machines lock for "enable-default-cni-516009" in 15.515331032s
	I1009 23:47:36.675602  121077 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-516009 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-516009 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1009 23:47:36.675725  121077 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 23:47:32.986664  116683 pod_ready.go:102] pod "metrics-server-57f55c9bc5-f9vxx" in "kube-system" namespace has status "Ready":"False"
	I1009 23:47:34.988252  116683 pod_ready.go:102] pod "metrics-server-57f55c9bc5-f9vxx" in "kube-system" namespace has status "Ready":"False"
	I1009 23:47:36.989863  116683 pod_ready.go:102] pod "metrics-server-57f55c9bc5-f9vxx" in "kube-system" namespace has status "Ready":"False"
	I1009 23:47:34.345736  120457 main.go:141] libmachine: (flannel-516009) DBG | Getting to WaitForSSH function...
	I1009 23:47:34.348623  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:34.349051  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:34.349085  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:34.349317  120457 main.go:141] libmachine: (flannel-516009) DBG | Using SSH client type: external
	I1009 23:47:34.349367  120457 main.go:141] libmachine: (flannel-516009) DBG | Using SSH private key: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/flannel-516009/id_rsa (-rw-------)
	I1009 23:47:34.349409  120457 main.go:141] libmachine: (flannel-516009) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17375-78415/.minikube/machines/flannel-516009/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 23:47:34.349425  120457 main.go:141] libmachine: (flannel-516009) DBG | About to run SSH command:
	I1009 23:47:34.349435  120457 main.go:141] libmachine: (flannel-516009) DBG | exit 0
	I1009 23:47:34.446180  120457 main.go:141] libmachine: (flannel-516009) DBG | SSH cmd err, output: <nil>: 
	I1009 23:47:34.446441  120457 main.go:141] libmachine: (flannel-516009) KVM machine creation complete!
	I1009 23:47:34.446807  120457 main.go:141] libmachine: (flannel-516009) Calling .GetConfigRaw
	I1009 23:47:34.447401  120457 main.go:141] libmachine: (flannel-516009) Calling .DriverName
	I1009 23:47:34.447611  120457 main.go:141] libmachine: (flannel-516009) Calling .DriverName
	I1009 23:47:34.447827  120457 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 23:47:34.447844  120457 main.go:141] libmachine: (flannel-516009) Calling .GetState
	I1009 23:47:34.449391  120457 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 23:47:34.449409  120457 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 23:47:34.449419  120457 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 23:47:34.449427  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHHostname
	I1009 23:47:34.451738  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:34.452327  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:34.452356  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:34.452549  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHPort
	I1009 23:47:34.452734  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:34.452882  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:34.453045  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHUsername
	I1009 23:47:34.453195  120457 main.go:141] libmachine: Using SSH client type: native
	I1009 23:47:34.453579  120457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.50.84 22 <nil> <nil>}
	I1009 23:47:34.453594  120457 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 23:47:34.581475  120457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
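
Once the external probe succeeds, provisioning switches to the in-process client ("Using SSH client type: native"); the &{{{<nil> ...} dump above is that client's configuration struct. A minimal equivalent of the `exit 0` round-trip using golang.org/x/crypto/ssh, with host-key checking disabled only because the VM and its key are throwaway test artifacts; the key path is a placeholder:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote opens an SSH session and runs cmd, returning a non-nil error
// (an *ssh.ExitError) when the remote command exits non-zero.
func runRemote(addr, user, keyPath, cmd string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run(cmd)
}

func main() {
	err := runRemote("192.168.50.84:22", "docker", "/path/to/id_rsa", "exit 0")
	fmt.Println("SSH cmd err:", err)
}
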
	I1009 23:47:34.581499  120457 main.go:141] libmachine: Detecting the provisioner...
	I1009 23:47:34.581508  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHHostname
	I1009 23:47:34.584273  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:34.584698  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:34.584732  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:34.584826  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHPort
	I1009 23:47:34.585058  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:34.585233  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:34.585385  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHUsername
	I1009 23:47:34.585544  120457 main.go:141] libmachine: Using SSH client type: native
	I1009 23:47:34.585891  120457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.50.84 22 <nil> <nil>}
	I1009 23:47:34.585904  120457 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 23:47:34.715292  120457 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1009 23:47:34.715385  120457 main.go:141] libmachine: found compatible host: buildroot
	I1009 23:47:34.715396  120457 main.go:141] libmachine: Provisioning with buildroot...
	I1009 23:47:34.715404  120457 main.go:141] libmachine: (flannel-516009) Calling .GetMachineName
	I1009 23:47:34.715664  120457 buildroot.go:166] provisioning hostname "flannel-516009"
	I1009 23:47:34.715697  120457 main.go:141] libmachine: (flannel-516009) Calling .GetMachineName
	I1009 23:47:34.715865  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHHostname
	I1009 23:47:34.718322  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:34.718690  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:34.718729  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:34.718878  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHPort
	I1009 23:47:34.719047  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:34.719231  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:34.719375  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHUsername
	I1009 23:47:34.719498  120457 main.go:141] libmachine: Using SSH client type: native
	I1009 23:47:34.719838  120457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.50.84 22 <nil> <nil>}
	I1009 23:47:34.719857  120457 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-516009 && echo "flannel-516009" | sudo tee /etc/hostname
	I1009 23:47:34.861216  120457 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-516009
	
	I1009 23:47:34.861246  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHHostname
	I1009 23:47:34.864155  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:34.864451  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:34.864469  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:34.864676  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHPort
	I1009 23:47:34.864880  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:34.865066  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:34.865250  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHUsername
	I1009 23:47:34.865418  120457 main.go:141] libmachine: Using SSH client type: native
	I1009 23:47:34.865772  120457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.50.84 22 <nil> <nil>}
	I1009 23:47:34.865792  120457 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-516009' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-516009/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-516009' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 23:47:35.003417  120457 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 23:47:35.003449  120457 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17375-78415/.minikube CaCertPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17375-78415/.minikube}
	I1009 23:47:35.003478  120457 buildroot.go:174] setting up certificates
	I1009 23:47:35.003490  120457 provision.go:83] configureAuth start
	I1009 23:47:35.003509  120457 main.go:141] libmachine: (flannel-516009) Calling .GetMachineName
	I1009 23:47:35.003786  120457 main.go:141] libmachine: (flannel-516009) Calling .GetIP
	I1009 23:47:35.006669  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:35.007062  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:35.007104  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:35.007262  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHHostname
	I1009 23:47:35.009528  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:35.009919  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:35.009941  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:35.010116  120457 provision.go:138] copyHostCerts
	I1009 23:47:35.010161  120457 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem, removing ...
	I1009 23:47:35.010175  120457 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem
	I1009 23:47:35.010241  120457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17375-78415/.minikube/ca.pem (1082 bytes)
	I1009 23:47:35.010317  120457 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem, removing ...
	I1009 23:47:35.010325  120457 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem
	I1009 23:47:35.010347  120457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17375-78415/.minikube/cert.pem (1123 bytes)
	I1009 23:47:35.010392  120457 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem, removing ...
	I1009 23:47:35.010399  120457 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem
	I1009 23:47:35.010429  120457 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17375-78415/.minikube/key.pem (1679 bytes)
	I1009 23:47:35.010497  120457 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca-key.pem org=jenkins.flannel-516009 san=[192.168.50.84 192.168.50.84 localhost 127.0.0.1 minikube flannel-516009]
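
provision.go:112 signs a per-machine server certificate from the local CA with the machine IP, localhost, and hostname as SANs. A compact sketch of producing a SAN-bearing certificate with crypto/x509; it self-signs for brevity, whereas minikube signs with its CA key, and the lifetime simply mirrors the CertExpiration value from the config dump above:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-516009"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs the provisioner lists: machine IP, loopback, hostname.
		IPAddresses: []net.IP{net.ParseIP("192.168.50.84"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "flannel-516009"},
	}
	// Self-signed: template doubles as parent. minikube passes its CA here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
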
	I1009 23:47:35.163434  120457 provision.go:172] copyRemoteCerts
	I1009 23:47:35.163488  120457 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 23:47:35.163514  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHHostname
	I1009 23:47:35.166414  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:35.166797  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:35.166838  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:35.167013  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHPort
	I1009 23:47:35.167169  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:35.167292  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHUsername
	I1009 23:47:35.167455  120457 sshutil.go:53] new ssh client: &{IP:192.168.50.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/flannel-516009/id_rsa Username:docker}
	I1009 23:47:35.260137  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 23:47:35.282491  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 23:47:35.303827  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 23:47:35.324881  120457 provision.go:86] duration metric: configureAuth took 321.374321ms
	I1009 23:47:35.324909  120457 buildroot.go:189] setting minikube options for container-runtime
	I1009 23:47:35.325111  120457 config.go:182] Loaded profile config "flannel-516009": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:47:35.325144  120457 main.go:141] libmachine: (flannel-516009) Calling .DriverName
	I1009 23:47:35.325415  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHHostname
	I1009 23:47:35.328132  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:35.328509  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:35.328543  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:35.328699  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHPort
	I1009 23:47:35.328889  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:35.329028  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:35.329150  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHUsername
	I1009 23:47:35.329301  120457 main.go:141] libmachine: Using SSH client type: native
	I1009 23:47:35.329618  120457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.50.84 22 <nil> <nil>}
	I1009 23:47:35.329632  120457 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1009 23:47:35.460317  120457 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1009 23:47:35.460344  120457 buildroot.go:70] root file system type: tmpfs
	I1009 23:47:35.460516  120457 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1009 23:47:35.460552  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHHostname
	I1009 23:47:35.463361  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:35.463731  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:35.463759  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:35.463931  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHPort
	I1009 23:47:35.464134  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:35.464308  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:35.464494  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHUsername
	I1009 23:47:35.464665  120457 main.go:141] libmachine: Using SSH client type: native
	I1009 23:47:35.464991  120457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.50.84 22 <nil> <nil>}
	I1009 23:47:35.465071  120457 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1009 23:47:35.603606  120457 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1009 23:47:35.603642  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHHostname
	I1009 23:47:35.606595  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:35.607001  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:35.607030  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:35.607178  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHPort
	I1009 23:47:35.607384  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:35.607572  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:35.607708  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHUsername
	I1009 23:47:35.607855  120457 main.go:141] libmachine: Using SSH client type: native
	I1009 23:47:35.608167  120457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.50.84 22 <nil> <nil>}
	I1009 23:47:35.608185  120457 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1009 23:47:36.405354  120457 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
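
The one-liner at 23:47:35.608 is deliberately idempotent: it only swaps docker.service.new into place (and daemon-reloads, enables, and restarts) when the rendered unit differs from what is installed, and the rendered unit clears ExecStart= before setting its own, as the comments inside it explain. A local Go sketch of that compare-then-swap step; the paths and the reload hook are placeholders:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// installIfChanged writes newContent to path only when it differs from
// what is already there, and runs reload only after a real change.
func installIfChanged(path string, newContent []byte, reload func() error) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return nil // unit already up to date; skip daemon-reload/restart
	}
	if err := os.WriteFile(path+".new", newContent, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	return reload()
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	err := installIfChanged("/tmp/docker.service", unit, func() error {
		fmt.Println("unit changed; would run systemctl daemon-reload && restart")
		return nil
	})
	fmt.Println("err:", err)
}
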
	
	I1009 23:47:36.405383  120457 main.go:141] libmachine: Checking connection to Docker...
	I1009 23:47:36.405397  120457 main.go:141] libmachine: (flannel-516009) Calling .GetURL
	I1009 23:47:36.406734  120457 main.go:141] libmachine: (flannel-516009) DBG | Using libvirt version 6000000
	I1009 23:47:36.408814  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:36.409115  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:36.409168  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:36.409298  120457 main.go:141] libmachine: Docker is up and running!
	I1009 23:47:36.409314  120457 main.go:141] libmachine: Reticulating splines...
	I1009 23:47:36.409322  120457 client.go:171] LocalClient.Create took 32.363679803s
	I1009 23:47:36.409350  120457 start.go:167] duration metric: libmachine.API.Create for "flannel-516009" took 32.363745454s
	I1009 23:47:36.409364  120457 start.go:300] post-start starting for "flannel-516009" (driver="kvm2")
	I1009 23:47:36.409376  120457 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 23:47:36.409401  120457 main.go:141] libmachine: (flannel-516009) Calling .DriverName
	I1009 23:47:36.409631  120457 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 23:47:36.409657  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHHostname
	I1009 23:47:36.411644  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:36.411973  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:36.412027  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:36.412209  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHPort
	I1009 23:47:36.412380  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:36.412537  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHUsername
	I1009 23:47:36.412669  120457 sshutil.go:53] new ssh client: &{IP:192.168.50.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/flannel-516009/id_rsa Username:docker}
	I1009 23:47:36.503537  120457 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 23:47:36.507696  120457 info.go:137] Remote host: Buildroot 2021.02.12
	I1009 23:47:36.507725  120457 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-78415/.minikube/addons for local assets ...
	I1009 23:47:36.507776  120457 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-78415/.minikube/files for local assets ...
	I1009 23:47:36.507851  120457 filesync.go:149] local asset: /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem -> 856012.pem in /etc/ssl/certs
	I1009 23:47:36.507935  120457 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 23:47:36.515700  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem --> /etc/ssl/certs/856012.pem (1708 bytes)
	I1009 23:47:36.538829  120457 start.go:303] post-start completed in 129.451422ms
	I1009 23:47:36.538882  120457 main.go:141] libmachine: (flannel-516009) Calling .GetConfigRaw
	I1009 23:47:36.539438  120457 main.go:141] libmachine: (flannel-516009) Calling .GetIP
	I1009 23:47:36.541917  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:36.542345  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:36.542379  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:36.542635  120457 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/config.json ...
	I1009 23:47:36.542809  120457 start.go:128] duration metric: createHost completed in 32.514539684s
	I1009 23:47:36.542829  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHHostname
	I1009 23:47:36.544848  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:36.545122  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:36.545144  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:36.545311  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHPort
	I1009 23:47:36.545468  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:36.545636  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:36.545774  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHUsername
	I1009 23:47:36.545937  120457 main.go:141] libmachine: Using SSH client type: native
	I1009 23:47:36.546241  120457 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8560] 0x7fb240 <nil>  [] 0s} 192.168.50.84 22 <nil> <nil>}
	I1009 23:47:36.546252  120457 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1009 23:47:36.675332  120457 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696895256.666293578
	
	I1009 23:47:36.675356  120457 fix.go:206] guest clock: 1696895256.666293578
	I1009 23:47:36.675365  120457 fix.go:219] Guest: 2023-10-09 23:47:36.666293578 +0000 UTC Remote: 2023-10-09 23:47:36.542818823 +0000 UTC m=+32.639216131 (delta=123.474755ms)
	I1009 23:47:36.675425  120457 fix.go:190] guest clock delta is within tolerance: 123.474755ms
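
fix.go samples the guest clock over SSH, computes the delta against the host, and only resynchronizes when the delta falls outside a tolerance; this run measured ~123ms, well inside it. A tiny sketch of that comparison; the tolerance value here is illustrative:

package main

import (
	"fmt"
	"time"
)

// clockDelta reports how far the guest clock is from the host's and
// whether the absolute difference is within tolerance.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(123 * time.Millisecond) // the delta observed above
	d, ok := clockDelta(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}
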
	I1009 23:47:36.675437  120457 start.go:83] releasing machines lock for "flannel-516009", held for 32.647271793s
	I1009 23:47:36.675465  120457 main.go:141] libmachine: (flannel-516009) Calling .DriverName
	I1009 23:47:36.675756  120457 main.go:141] libmachine: (flannel-516009) Calling .GetIP
	I1009 23:47:36.678891  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:36.679255  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:36.679287  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:36.679438  120457 main.go:141] libmachine: (flannel-516009) Calling .DriverName
	I1009 23:47:36.680108  120457 main.go:141] libmachine: (flannel-516009) Calling .DriverName
	I1009 23:47:36.680308  120457 main.go:141] libmachine: (flannel-516009) Calling .DriverName
	I1009 23:47:36.680421  120457 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 23:47:36.680464  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHHostname
	I1009 23:47:36.680576  120457 ssh_runner.go:195] Run: cat /version.json
	I1009 23:47:36.680605  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHHostname
	I1009 23:47:36.683079  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:36.683278  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:36.683431  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:36.683473  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:36.683573  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHPort
	I1009 23:47:36.683680  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:36.683709  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:36.683734  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:36.683892  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHPort
	I1009 23:47:36.683914  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHUsername
	I1009 23:47:36.684075  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHKeyPath
	I1009 23:47:36.684077  120457 sshutil.go:53] new ssh client: &{IP:192.168.50.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/flannel-516009/id_rsa Username:docker}
	I1009 23:47:36.684179  120457 main.go:141] libmachine: (flannel-516009) Calling .GetSSHUsername
	I1009 23:47:36.684328  120457 sshutil.go:53] new ssh client: &{IP:192.168.50.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/flannel-516009/id_rsa Username:docker}
	I1009 23:47:36.775646  120457 ssh_runner.go:195] Run: systemctl --version
	I1009 23:47:36.800879  120457 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 23:47:36.807238  120457 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 23:47:36.807319  120457 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:47:36.823083  120457 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 23:47:36.823113  120457 start.go:472] detecting cgroup driver to use...
	I1009 23:47:36.823251  120457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:47:36.843235  120457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1009 23:47:36.854556  120457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1009 23:47:36.866195  120457 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1009 23:47:36.866265  120457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1009 23:47:36.878992  120457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 23:47:36.890508  120457 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1009 23:47:36.902229  120457 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1009 23:47:36.913976  120457 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 23:47:36.926054  120457 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1009 23:47:36.936815  120457 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 23:47:36.947366  120457 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 23:47:36.957402  120457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:47:37.078640  120457 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1009 23:47:37.097700  120457 start.go:472] detecting cgroup driver to use...
	I1009 23:47:37.097780  120457 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1009 23:47:37.117475  120457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:47:37.130929  120457 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 23:47:37.148787  120457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:47:37.160622  120457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 23:47:37.174728  120457 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1009 23:47:37.204753  120457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1009 23:47:37.218220  120457 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:47:37.236159  120457 ssh_runner.go:195] Run: which cri-dockerd
	I1009 23:47:37.240268  120457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1009 23:47:37.250131  120457 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1009 23:47:37.267794  120457 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1009 23:47:37.381719  120457 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1009 23:47:37.509120  120457 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1009 23:47:37.509291  120457 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
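
docker.go:555 renders a small /etc/docker/daemon.json (130 bytes here) that pins Docker's cgroup driver to cgroupfs. The log never shows the file's contents, so the following sketch rests on an assumption: "exec-opts" with native.cgroupdriver is Docker's documented daemon.json mechanism for this, but the exact keys minikube writes may differ:

package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig models the daemon.json key needed to pin Docker's
// cgroup driver; "exec-opts" is Docker's documented option name.
type daemonConfig struct {
	ExecOpts []string `json:"exec-opts"`
}

func main() {
	cfg := daemonConfig{ExecOpts: []string{"native.cgroupdriver=cgroupfs"}}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // this payload would be scp'd to /etc/docker/daemon.json
}
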
	I1009 23:47:37.527628  120457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:47:37.633935  120457 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 23:47:39.671975  120457 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.037996981s)
	I1009 23:47:39.672048  120457 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1009 23:47:39.791329  120457 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1009 23:47:39.923438  120457 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1009 23:47:40.051478  120457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:47:40.171489  120457 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1009 23:47:40.188675  120457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:47:40.307387  120457 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1009 23:47:40.394656  120457 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1009 23:47:40.394751  120457 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1009 23:47:40.403198  120457 start.go:540] Will wait 60s for crictl version
	I1009 23:47:40.403269  120457 ssh_runner.go:195] Run: which crictl
	I1009 23:47:40.407789  120457 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 23:47:40.467975  120457 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1009 23:47:40.468055  120457 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 23:47:40.502733  120457 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1009 23:47:36.677786  121077 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 23:47:36.678006  121077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17375-78415/.minikube/bin/docker-machine-driver-kvm2
	I1009 23:47:36.678050  121077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:47:36.694508  121077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36857
	I1009 23:47:36.694885  121077 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:47:36.695460  121077 main.go:141] libmachine: Using API Version  1
	I1009 23:47:36.695486  121077 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:47:36.695802  121077 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:47:36.695988  121077 main.go:141] libmachine: (enable-default-cni-516009) Calling .GetMachineName
	I1009 23:47:36.696147  121077 main.go:141] libmachine: (enable-default-cni-516009) Calling .DriverName
	I1009 23:47:36.696328  121077 start.go:159] libmachine.API.Create for "enable-default-cni-516009" (driver="kvm2")
	I1009 23:47:36.696359  121077 client.go:168] LocalClient.Create starting
	I1009 23:47:36.696385  121077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem
	I1009 23:47:36.696417  121077 main.go:141] libmachine: Decoding PEM data...
	I1009 23:47:36.696435  121077 main.go:141] libmachine: Parsing certificate...
	I1009 23:47:36.696487  121077 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem
	I1009 23:47:36.696505  121077 main.go:141] libmachine: Decoding PEM data...
	I1009 23:47:36.696520  121077 main.go:141] libmachine: Parsing certificate...
	I1009 23:47:36.696536  121077 main.go:141] libmachine: Running pre-create checks...
	I1009 23:47:36.696545  121077 main.go:141] libmachine: (enable-default-cni-516009) Calling .PreCreateCheck
	I1009 23:47:36.696970  121077 main.go:141] libmachine: (enable-default-cni-516009) Calling .GetConfigRaw
	I1009 23:47:36.697384  121077 main.go:141] libmachine: Creating machine...
	I1009 23:47:36.697400  121077 main.go:141] libmachine: (enable-default-cni-516009) Calling .Create
	I1009 23:47:36.697536  121077 main.go:141] libmachine: (enable-default-cni-516009) Creating KVM machine...
	I1009 23:47:36.698757  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | found existing default KVM network
	I1009 23:47:36.700452  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:36.700263  121174 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b2:be:a3} reservation:<nil>}
	I1009 23:47:36.701815  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:36.701726  121174 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:54:90:33} reservation:<nil>}
	I1009 23:47:36.702606  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:36.702536  121174 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:fa:7e:0b} reservation:<nil>}
	I1009 23:47:36.703678  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:36.703576  121174 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002cd9c0}
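
network.go walks candidate private /24 subnets, skipping any that an existing libvirt network already occupies, and takes the first free one (192.168.72.0/24 here). A sketch of that selection loop; the candidate list and the "taken" map stand in for minikube's live libvirt inspection:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate /24 that is not already taken.
func firstFreeSubnet(candidates []string, taken map[string]bool) (*net.IPNet, error) {
	for _, c := range candidates {
		if taken[c] {
			fmt.Printf("skipping subnet %s that is taken\n", c)
			continue
		}
		_, ipnet, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		return ipnet, nil
	}
	return nil, fmt.Errorf("no free subnet among %d candidates", len(candidates))
}

func main() {
	// In the run above, these three came from live libvirt networks.
	taken := map[string]bool{
		"192.168.39.0/24": true,
		"192.168.50.0/24": true,
		"192.168.61.0/24": true,
	}
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
	subnet, err := firstFreeSubnet(candidates, taken)
	fmt.Println(subnet, err)
}
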
	I1009 23:47:36.708938  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | trying to create private KVM network mk-enable-default-cni-516009 192.168.72.0/24...
	I1009 23:47:36.786482  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | private KVM network mk-enable-default-cni-516009 192.168.72.0/24 created
	I1009 23:47:36.786528  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:36.786427  121174 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17375-78415/.minikube
	I1009 23:47:36.786668  121077 main.go:141] libmachine: (enable-default-cni-516009) Setting up store path in /home/jenkins/minikube-integration/17375-78415/.minikube/machines/enable-default-cni-516009 ...
	I1009 23:47:36.786702  121077 main.go:141] libmachine: (enable-default-cni-516009) Building disk image from file:///home/jenkins/minikube-integration/17375-78415/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1009 23:47:36.786733  121077 main.go:141] libmachine: (enable-default-cni-516009) Downloading /home/jenkins/minikube-integration/17375-78415/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17375-78415/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1009 23:47:37.046239  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:37.046109  121174 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/enable-default-cni-516009/id_rsa...
	I1009 23:47:37.181486  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:37.181356  121174 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/enable-default-cni-516009/enable-default-cni-516009.rawdisk...
	I1009 23:47:37.181518  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | Writing magic tar header
	I1009 23:47:37.181543  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | Writing SSH key tar header
	I1009 23:47:37.181564  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:37.181475  121174 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17375-78415/.minikube/machines/enable-default-cni-516009 ...
	I1009 23:47:37.181598  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17375-78415/.minikube/machines/enable-default-cni-516009
	I1009 23:47:37.181627  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17375-78415/.minikube/machines
	I1009 23:47:37.181646  121077 main.go:141] libmachine: (enable-default-cni-516009) Setting executable bit set on /home/jenkins/minikube-integration/17375-78415/.minikube/machines/enable-default-cni-516009 (perms=drwx------)
	I1009 23:47:37.181663  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17375-78415/.minikube
	I1009 23:47:37.181683  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17375-78415
	I1009 23:47:37.181699  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 23:47:37.181721  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | Checking permissions on dir: /home/jenkins
	I1009 23:47:37.181745  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | Checking permissions on dir: /home
	I1009 23:47:37.181774  121077 main.go:141] libmachine: (enable-default-cni-516009) Setting executable bit set on /home/jenkins/minikube-integration/17375-78415/.minikube/machines (perms=drwxr-xr-x)
	I1009 23:47:37.181803  121077 main.go:141] libmachine: (enable-default-cni-516009) Setting executable bit set on /home/jenkins/minikube-integration/17375-78415/.minikube (perms=drwxr-xr-x)
	I1009 23:47:37.181821  121077 main.go:141] libmachine: (enable-default-cni-516009) Setting executable bit set on /home/jenkins/minikube-integration/17375-78415 (perms=drwxrwxr-x)
	I1009 23:47:37.181831  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | Skipping /home - not owner
	I1009 23:47:37.181843  121077 main.go:141] libmachine: (enable-default-cni-516009) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 23:47:37.181856  121077 main.go:141] libmachine: (enable-default-cni-516009) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 23:47:37.181869  121077 main.go:141] libmachine: (enable-default-cni-516009) Creating domain...
	I1009 23:47:37.182938  121077 main.go:141] libmachine: (enable-default-cni-516009) define libvirt domain using xml: 
	I1009 23:47:37.182963  121077 main.go:141] libmachine: (enable-default-cni-516009) <domain type='kvm'>
	I1009 23:47:37.182977  121077 main.go:141] libmachine: (enable-default-cni-516009)   <name>enable-default-cni-516009</name>
	I1009 23:47:37.182996  121077 main.go:141] libmachine: (enable-default-cni-516009)   <memory unit='MiB'>3072</memory>
	I1009 23:47:37.183011  121077 main.go:141] libmachine: (enable-default-cni-516009)   <vcpu>2</vcpu>
	I1009 23:47:37.183024  121077 main.go:141] libmachine: (enable-default-cni-516009)   <features>
	I1009 23:47:37.183039  121077 main.go:141] libmachine: (enable-default-cni-516009)     <acpi/>
	I1009 23:47:37.183051  121077 main.go:141] libmachine: (enable-default-cni-516009)     <apic/>
	I1009 23:47:37.183066  121077 main.go:141] libmachine: (enable-default-cni-516009)     <pae/>
	I1009 23:47:37.183085  121077 main.go:141] libmachine: (enable-default-cni-516009)     
	I1009 23:47:37.183096  121077 main.go:141] libmachine: (enable-default-cni-516009)   </features>
	I1009 23:47:37.183112  121077 main.go:141] libmachine: (enable-default-cni-516009)   <cpu mode='host-passthrough'>
	I1009 23:47:37.183127  121077 main.go:141] libmachine: (enable-default-cni-516009)   
	I1009 23:47:37.183138  121077 main.go:141] libmachine: (enable-default-cni-516009)   </cpu>
	I1009 23:47:37.183148  121077 main.go:141] libmachine: (enable-default-cni-516009)   <os>
	I1009 23:47:37.183156  121077 main.go:141] libmachine: (enable-default-cni-516009)     <type>hvm</type>
	I1009 23:47:37.183186  121077 main.go:141] libmachine: (enable-default-cni-516009)     <boot dev='cdrom'/>
	I1009 23:47:37.183213  121077 main.go:141] libmachine: (enable-default-cni-516009)     <boot dev='hd'/>
	I1009 23:47:37.183229  121077 main.go:141] libmachine: (enable-default-cni-516009)     <bootmenu enable='no'/>
	I1009 23:47:37.183242  121077 main.go:141] libmachine: (enable-default-cni-516009)   </os>
	I1009 23:47:37.183256  121077 main.go:141] libmachine: (enable-default-cni-516009)   <devices>
	I1009 23:47:37.183271  121077 main.go:141] libmachine: (enable-default-cni-516009)     <disk type='file' device='cdrom'>
	I1009 23:47:37.183291  121077 main.go:141] libmachine: (enable-default-cni-516009)       <source file='/home/jenkins/minikube-integration/17375-78415/.minikube/machines/enable-default-cni-516009/boot2docker.iso'/>
	I1009 23:47:37.183305  121077 main.go:141] libmachine: (enable-default-cni-516009)       <target dev='hdc' bus='scsi'/>
	I1009 23:47:37.183330  121077 main.go:141] libmachine: (enable-default-cni-516009)       <readonly/>
	I1009 23:47:37.183355  121077 main.go:141] libmachine: (enable-default-cni-516009)     </disk>
	I1009 23:47:37.183369  121077 main.go:141] libmachine: (enable-default-cni-516009)     <disk type='file' device='disk'>
	I1009 23:47:37.183391  121077 main.go:141] libmachine: (enable-default-cni-516009)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 23:47:37.183410  121077 main.go:141] libmachine: (enable-default-cni-516009)       <source file='/home/jenkins/minikube-integration/17375-78415/.minikube/machines/enable-default-cni-516009/enable-default-cni-516009.rawdisk'/>
	I1009 23:47:37.183431  121077 main.go:141] libmachine: (enable-default-cni-516009)       <target dev='hda' bus='virtio'/>
	I1009 23:47:37.183445  121077 main.go:141] libmachine: (enable-default-cni-516009)     </disk>
	I1009 23:47:37.183459  121077 main.go:141] libmachine: (enable-default-cni-516009)     <interface type='network'>
	I1009 23:47:37.183476  121077 main.go:141] libmachine: (enable-default-cni-516009)       <source network='mk-enable-default-cni-516009'/>
	I1009 23:47:37.183489  121077 main.go:141] libmachine: (enable-default-cni-516009)       <model type='virtio'/>
	I1009 23:47:37.183497  121077 main.go:141] libmachine: (enable-default-cni-516009)     </interface>
	I1009 23:47:37.183508  121077 main.go:141] libmachine: (enable-default-cni-516009)     <interface type='network'>
	I1009 23:47:37.183522  121077 main.go:141] libmachine: (enable-default-cni-516009)       <source network='default'/>
	I1009 23:47:37.183536  121077 main.go:141] libmachine: (enable-default-cni-516009)       <model type='virtio'/>
	I1009 23:47:37.183548  121077 main.go:141] libmachine: (enable-default-cni-516009)     </interface>
	I1009 23:47:37.183561  121077 main.go:141] libmachine: (enable-default-cni-516009)     <serial type='pty'>
	I1009 23:47:37.183580  121077 main.go:141] libmachine: (enable-default-cni-516009)       <target port='0'/>
	I1009 23:47:37.183591  121077 main.go:141] libmachine: (enable-default-cni-516009)     </serial>
	I1009 23:47:37.183602  121077 main.go:141] libmachine: (enable-default-cni-516009)     <console type='pty'>
	I1009 23:47:37.183621  121077 main.go:141] libmachine: (enable-default-cni-516009)       <target type='serial' port='0'/>
	I1009 23:47:37.183633  121077 main.go:141] libmachine: (enable-default-cni-516009)     </console>
	I1009 23:47:37.183647  121077 main.go:141] libmachine: (enable-default-cni-516009)     <rng model='virtio'>
	I1009 23:47:37.183660  121077 main.go:141] libmachine: (enable-default-cni-516009)       <backend model='random'>/dev/random</backend>
	I1009 23:47:37.183670  121077 main.go:141] libmachine: (enable-default-cni-516009)     </rng>
	I1009 23:47:37.183681  121077 main.go:141] libmachine: (enable-default-cni-516009)     
	I1009 23:47:37.183693  121077 main.go:141] libmachine: (enable-default-cni-516009)     
	I1009 23:47:37.183704  121077 main.go:141] libmachine: (enable-default-cni-516009)   </devices>
	I1009 23:47:37.183719  121077 main.go:141] libmachine: (enable-default-cni-516009) </domain>
	I1009 23:47:37.183734  121077 main.go:141] libmachine: (enable-default-cni-516009) 
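
The XML printed above is handed to libvirt to define and boot the VM ("define libvirt domain using xml" followed by "Creating domain..."). A minimal sketch of that step using the official Go bindings (libvirt.org/go/libvirt, which need cgo and a running libvirtd); the domain XML string is elided here and the error handling is deliberately terse:

```go
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the system libvirt daemon, the default URI the kvm2
	// driver uses (qemu:///system, as seen in the StartCluster config).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	domainXML := `<domain type='kvm'>...</domain>` // the XML from the log, elided

	// Define the persistent domain from XML, then start it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain started")
}
```
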
	I1009 23:47:37.187927  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | domain enable-default-cni-516009 has defined MAC address 52:54:00:c1:e7:39 in network default
	I1009 23:47:37.188546  121077 main.go:141] libmachine: (enable-default-cni-516009) Ensuring networks are active...
	I1009 23:47:37.188583  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | domain enable-default-cni-516009 has defined MAC address 52:54:00:c0:47:11 in network mk-enable-default-cni-516009
	I1009 23:47:37.189289  121077 main.go:141] libmachine: (enable-default-cni-516009) Ensuring network default is active
	I1009 23:47:37.189646  121077 main.go:141] libmachine: (enable-default-cni-516009) Ensuring network mk-enable-default-cni-516009 is active
	I1009 23:47:37.190325  121077 main.go:141] libmachine: (enable-default-cni-516009) Getting domain xml...
	I1009 23:47:37.191305  121077 main.go:141] libmachine: (enable-default-cni-516009) Creating domain...
	I1009 23:47:38.464328  121077 main.go:141] libmachine: (enable-default-cni-516009) Waiting to get IP...
	I1009 23:47:38.465339  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | domain enable-default-cni-516009 has defined MAC address 52:54:00:c0:47:11 in network mk-enable-default-cni-516009
	I1009 23:47:38.465857  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | unable to find current IP address of domain enable-default-cni-516009 in network mk-enable-default-cni-516009
	I1009 23:47:38.465897  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:38.465818  121174 retry.go:31] will retry after 242.096295ms: waiting for machine to come up
	I1009 23:47:38.709030  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | domain enable-default-cni-516009 has defined MAC address 52:54:00:c0:47:11 in network mk-enable-default-cni-516009
	I1009 23:47:38.709691  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | unable to find current IP address of domain enable-default-cni-516009 in network mk-enable-default-cni-516009
	I1009 23:47:38.709736  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:38.709630  121174 retry.go:31] will retry after 316.273691ms: waiting for machine to come up
	I1009 23:47:39.027084  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | domain enable-default-cni-516009 has defined MAC address 52:54:00:c0:47:11 in network mk-enable-default-cni-516009
	I1009 23:47:39.027528  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | unable to find current IP address of domain enable-default-cni-516009 in network mk-enable-default-cni-516009
	I1009 23:47:39.027560  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:39.027469  121174 retry.go:31] will retry after 336.77229ms: waiting for machine to come up
	I1009 23:47:39.365960  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | domain enable-default-cni-516009 has defined MAC address 52:54:00:c0:47:11 in network mk-enable-default-cni-516009
	I1009 23:47:39.366469  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | unable to find current IP address of domain enable-default-cni-516009 in network mk-enable-default-cni-516009
	I1009 23:47:39.366501  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:39.366368  121174 retry.go:31] will retry after 548.375585ms: waiting for machine to come up
	I1009 23:47:39.916765  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | domain enable-default-cni-516009 has defined MAC address 52:54:00:c0:47:11 in network mk-enable-default-cni-516009
	I1009 23:47:39.917358  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | unable to find current IP address of domain enable-default-cni-516009 in network mk-enable-default-cni-516009
	I1009 23:47:39.917382  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:39.917313  121174 retry.go:31] will retry after 596.893229ms: waiting for machine to come up
	I1009 23:47:40.516238  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | domain enable-default-cni-516009 has defined MAC address 52:54:00:c0:47:11 in network mk-enable-default-cni-516009
	I1009 23:47:40.516738  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | unable to find current IP address of domain enable-default-cni-516009 in network mk-enable-default-cni-516009
	I1009 23:47:40.516784  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:40.516665  121174 retry.go:31] will retry after 817.992027ms: waiting for machine to come up
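
The repeated "will retry after ...: waiting for machine to come up" lines come from polling the DHCP lease table with a growing, jittered delay. A generic sketch of that pattern (not minikube's retry package; the backoff constants here are made up):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it yields an address, sleeping a randomized,
// doubling interval between attempts, in the spirit of the retry.go:31 lines.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	wait := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if wait < 2*time.Second {
			wait *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 { // simulate the lease not existing yet
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.72.10", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
```
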
	I1009 23:47:39.488376  116683 pod_ready.go:102] pod "metrics-server-57f55c9bc5-f9vxx" in "kube-system" namespace has status "Ready":"False"
	I1009 23:47:41.995038  116683 pod_ready.go:102] pod "metrics-server-57f55c9bc5-f9vxx" in "kube-system" namespace has status "Ready":"False"
	I1009 23:47:40.539069  120457 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1009 23:47:40.539123  120457 main.go:141] libmachine: (flannel-516009) Calling .GetIP
	I1009 23:47:40.542075  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:40.542519  120457 main.go:141] libmachine: (flannel-516009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:61:6f:27", ip: ""} in network mk-flannel-516009: {Iface:virbr1 ExpiryTime:2023-10-10 00:47:21 +0000 UTC Type:0 Mac:52:54:00:61:6f:27 Iaid: IPaddr:192.168.50.84 Prefix:24 Hostname:flannel-516009 Clientid:01:52:54:00:61:6f:27}
	I1009 23:47:40.542543  120457 main.go:141] libmachine: (flannel-516009) DBG | domain flannel-516009 has defined IP address 192.168.50.84 and MAC address 52:54:00:61:6f:27 in network mk-flannel-516009
	I1009 23:47:40.542753  120457 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1009 23:47:40.547222  120457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
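
The bash one-liner above rewrites /etc/hosts idempotently: drop any stale host.minikube.internal mapping, append the fresh one, then install the result with sudo cp. The same filter-and-append in Go, applied to a local file (the path in main is a hypothetical sample, not a file from this run):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so exactly one line maps ip to name, mirroring
// the grep -v / echo / sudo cp pipeline. Writing to a temp file and renaming
// keeps the update atomic.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping, like grep -v $'\t<name>$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := pinHost("hosts.sample", "192.168.50.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
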
	I1009 23:47:40.560833  120457 localpath.go:92] copying /home/jenkins/minikube-integration/17375-78415/.minikube/client.crt -> /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/client.crt
	I1009 23:47:40.560998  120457 localpath.go:117] copying /home/jenkins/minikube-integration/17375-78415/.minikube/client.key -> /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/client.key
	I1009 23:47:40.561129  120457 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1009 23:47:40.561190  120457 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 23:47:40.580821  120457 docker.go:689] Got preloaded images: 
	I1009 23:47:40.580853  120457 docker.go:695] registry.k8s.io/kube-apiserver:v1.28.2 wasn't preloaded
	I1009 23:47:40.580914  120457 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1009 23:47:40.590111  120457 ssh_runner.go:195] Run: which lz4
	I1009 23:47:40.596951  120457 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 23:47:40.604678  120457 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 23:47:40.604716  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422207204 bytes)
	I1009 23:47:42.314057  120457 docker.go:653] Took 1.717144 seconds to copy over tarball
	I1009 23:47:42.314137  120457 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 23:47:41.337002  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | domain enable-default-cni-516009 has defined MAC address 52:54:00:c0:47:11 in network mk-enable-default-cni-516009
	I1009 23:47:41.337378  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | unable to find current IP address of domain enable-default-cni-516009 in network mk-enable-default-cni-516009
	I1009 23:47:41.337404  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:41.337331  121174 retry.go:31] will retry after 928.199072ms: waiting for machine to come up
	I1009 23:47:42.267404  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | domain enable-default-cni-516009 has defined MAC address 52:54:00:c0:47:11 in network mk-enable-default-cni-516009
	I1009 23:47:42.267968  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | unable to find current IP address of domain enable-default-cni-516009 in network mk-enable-default-cni-516009
	I1009 23:47:42.267999  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:42.267916  121174 retry.go:31] will retry after 1.152153388s: waiting for machine to come up
	I1009 23:47:43.421229  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | domain enable-default-cni-516009 has defined MAC address 52:54:00:c0:47:11 in network mk-enable-default-cni-516009
	I1009 23:47:43.421739  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | unable to find current IP address of domain enable-default-cni-516009 in network mk-enable-default-cni-516009
	I1009 23:47:43.421763  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:43.421685  121174 retry.go:31] will retry after 1.321339125s: waiting for machine to come up
	I1009 23:47:44.745440  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | domain enable-default-cni-516009 has defined MAC address 52:54:00:c0:47:11 in network mk-enable-default-cni-516009
	I1009 23:47:44.746009  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | unable to find current IP address of domain enable-default-cni-516009 in network mk-enable-default-cni-516009
	I1009 23:47:44.746043  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:44.745949  121174 retry.go:31] will retry after 1.992148688s: waiting for machine to come up
	I1009 23:47:44.491393  116683 pod_ready.go:102] pod "metrics-server-57f55c9bc5-f9vxx" in "kube-system" namespace has status "Ready":"False"
	I1009 23:47:46.986596  116683 pod_ready.go:102] pod "metrics-server-57f55c9bc5-f9vxx" in "kube-system" namespace has status "Ready":"False"
	I1009 23:47:45.181121  120457 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.866953781s)
	I1009 23:47:45.181167  120457 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 23:47:45.223310  120457 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1009 23:47:45.233956  120457 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1009 23:47:45.250970  120457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:47:45.365945  120457 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1009 23:47:47.253857  120457 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.887833939s)
	I1009 23:47:47.253978  120457 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1009 23:47:47.278980  120457 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1009 23:47:47.279009  120457 cache_images.go:84] Images are preloaded, skipping loading
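
After restarting Docker, the runner lists images and concludes "Images are preloaded, skipping loading". A simplified sketch of that check, shelling out to the same `docker images --format` command (the required-image list in main is just an excerpt from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloaded reports whether every required image appears in
// `docker images --format {{.Repository}}:{{.Tag}}`.
func preloaded(required []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			return false, nil // at least one image wasn't preloaded
		}
	}
	return true, nil
}

func main() {
	ok, err := preloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.28.2",
		"registry.k8s.io/etcd:3.5.9-0",
	})
	fmt.Println(ok, err)
}
```
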
	I1009 23:47:47.279072  120457 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1009 23:47:47.308546  120457 cni.go:84] Creating CNI manager for "flannel"
	I1009 23:47:47.308590  120457 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1009 23:47:47.308618  120457 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.84 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-516009 NodeName:flannel-516009 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 23:47:47.308833  120457 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "flannel-516009"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
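
The generated kubeadm.yaml above is four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick structural sanity check, sketched with gopkg.in/yaml.v3 (an assumed dependency, not something minikube itself uses here), is to decode each document and read back its `kind`:

```go
package main

import (
	"fmt"
	"io"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

// validateDocs decodes every YAML document in r and returns each one's kind,
// enough to confirm the multi-document kubeadm config parses cleanly.
func validateDocs(r io.Reader) ([]string, error) {
	dec := yaml.NewDecoder(r)
	var kinds []string
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		kind, _ := doc["kind"].(string)
		kinds = append(kinds, kind)
	}
	return kinds, nil
}

func main() {
	cfg := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
`
	kinds, err := validateDocs(strings.NewReader(cfg))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(kinds) // [InitConfiguration ClusterConfiguration]
}
```
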
	
	I1009 23:47:47.308970  120457 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=flannel-516009 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:flannel-516009 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:}
	I1009 23:47:47.309042  120457 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1009 23:47:47.317762  120457 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 23:47:47.317839  120457 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 23:47:47.328827  120457 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1009 23:47:47.345723  120457 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 23:47:47.361533  120457 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1009 23:47:47.379524  120457 ssh_runner.go:195] Run: grep 192.168.50.84	control-plane.minikube.internal$ /etc/hosts
	I1009 23:47:47.383514  120457 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 23:47:47.397808  120457 certs.go:56] Setting up /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009 for IP: 192.168.50.84
	I1009 23:47:47.397847  120457 certs.go:190] acquiring lock for shared ca certs: {Name:mke2558e764208d6103dc9316e1963152570f27b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:47:47.398031  120457 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17375-78415/.minikube/ca.key
	I1009 23:47:47.398098  120457 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17375-78415/.minikube/proxy-client-ca.key
	I1009 23:47:47.398222  120457 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/client.key
	I1009 23:47:47.398255  120457 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/apiserver.key.875754f7
	I1009 23:47:47.398275  120457 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/apiserver.crt.875754f7 with IP's: [192.168.50.84 10.96.0.1 127.0.0.1 10.0.0.1]
	I1009 23:47:47.546512  120457 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/apiserver.crt.875754f7 ...
	I1009 23:47:47.546547  120457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/apiserver.crt.875754f7: {Name:mka4b3e6734126331a77b6abc677e8cdd1e7dc89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:47:47.546722  120457 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/apiserver.key.875754f7 ...
	I1009 23:47:47.546737  120457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/apiserver.key.875754f7: {Name:mkc5c1963e82ad9c8e71920522af1c35db8e4baa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:47:47.546806  120457 certs.go:337] copying /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/apiserver.crt.875754f7 -> /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/apiserver.crt
	I1009 23:47:47.546865  120457 certs.go:341] copying /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/apiserver.key.875754f7 -> /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/apiserver.key
	I1009 23:47:47.546911  120457 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/proxy-client.key
	I1009 23:47:47.546925  120457 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/proxy-client.crt with IP's: []
	I1009 23:47:47.642068  120457 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/proxy-client.crt ...
	I1009 23:47:47.642096  120457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/proxy-client.crt: {Name:mk9fd56a43568f417ed0dae28fe38ad81db21808 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:47:47.642284  120457 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/proxy-client.key ...
	I1009 23:47:47.642297  120457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/proxy-client.key: {Name:mk3485e192de152254b8f9585f124a8f9ac870fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
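
The apiserver certificate generated above is signed for the node IP plus the service-network and loopback IPs ([192.168.50.84 10.96.0.1 127.0.0.1 10.0.0.1]). A short crypto/x509 sketch that issues a certificate with the same kind of IP SAN list; note minikube signs these with its minikubeCA, whereas this example self-signs purely to stay compact:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the IP SANs from the log
			net.ParseIP("192.168.50.84"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
	}
	// Self-signed: template doubles as the issuer certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
```
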
	I1009 23:47:47.642589  120457 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/85601.pem (1338 bytes)
	W1009 23:47:47.642634  120457 certs.go:433] ignoring /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/85601_empty.pem, impossibly tiny 0 bytes
	I1009 23:47:47.642646  120457 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 23:47:47.642669  120457 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/ca.pem (1082 bytes)
	I1009 23:47:47.642691  120457 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/cert.pem (1123 bytes)
	I1009 23:47:47.642716  120457 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/certs/home/jenkins/minikube-integration/17375-78415/.minikube/certs/key.pem (1679 bytes)
	I1009 23:47:47.642752  120457 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem (1708 bytes)
	I1009 23:47:47.643421  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1009 23:47:47.667900  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 23:47:47.693650  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 23:47:47.717816  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/flannel-516009/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 23:47:47.743754  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 23:47:47.771574  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 23:47:47.798326  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 23:47:47.825333  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 23:47:47.851679  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/certs/85601.pem --> /usr/share/ca-certificates/85601.pem (1338 bytes)
	I1009 23:47:47.881322  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/ssl/certs/856012.pem --> /usr/share/ca-certificates/856012.pem (1708 bytes)
	I1009 23:47:47.909610  120457 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-78415/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 23:47:47.936575  120457 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 23:47:47.953113  120457 ssh_runner.go:195] Run: openssl version
	I1009 23:47:47.959134  120457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 23:47:47.969397  120457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:47:47.974079  120457 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  9 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:47:47.974142  120457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:47:47.979877  120457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 23:47:47.992539  120457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/85601.pem && ln -fs /usr/share/ca-certificates/85601.pem /etc/ssl/certs/85601.pem"
	I1009 23:47:48.004641  120457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85601.pem
	I1009 23:47:48.010492  120457 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  9 23:00 /usr/share/ca-certificates/85601.pem
	I1009 23:47:48.010554  120457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85601.pem
	I1009 23:47:48.015949  120457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/85601.pem /etc/ssl/certs/51391683.0"
	I1009 23:47:48.025593  120457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/856012.pem && ln -fs /usr/share/ca-certificates/856012.pem /etc/ssl/certs/856012.pem"
	I1009 23:47:48.035719  120457 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/856012.pem
	I1009 23:47:48.040488  120457 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  9 23:00 /usr/share/ca-certificates/856012.pem
	I1009 23:47:48.040540  120457 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/856012.pem
	I1009 23:47:48.046414  120457 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/856012.pem /etc/ssl/certs/3ec20f2e.0"
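
Each installed CA gets a symlink named after its OpenSSL subject hash (the b5213941.0-style names above), which is how OpenSSL locates trusted certificates in /etc/ssl/certs. The hash is obtained by shelling out to `openssl x509 -hash -noout`, exactly as the log shows; a small sketch of the whole link step:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates certsDir/<subject-hash>.0 -> certPath, reproducing the
// `test -L ... || ln -fs ...` commands in the log.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
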
	I1009 23:47:48.056648  120457 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1009 23:47:48.061251  120457 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1009 23:47:48.061326  120457 kubeadm.go:404] StartCluster: {Name:flannel-516009 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:flannel-516009 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.84 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:47:48.061473  120457 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1009 23:47:48.088311  120457 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 23:47:48.097415  120457 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 23:47:48.106530  120457 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 23:47:48.115090  120457 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 23:47:48.115143  120457 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 23:47:48.170453  120457 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1009 23:47:48.170924  120457 kubeadm.go:322] [preflight] Running pre-flight checks
	I1009 23:47:48.337396  120457 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 23:47:48.337575  120457 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 23:47:48.337693  120457 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 23:47:48.696237  120457 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 23:47:48.698559  120457 out.go:204]   - Generating certificates and keys ...
	I1009 23:47:48.698722  120457 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1009 23:47:48.698820  120457 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1009 23:47:48.811888  120457 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 23:47:49.453532  115309 system_pods.go:86] 8 kube-system pods found
	I1009 23:47:49.453561  115309 system_pods.go:89] "coredns-5644d7b6d9-w2qqz" [22ba58b1-12d6-49e9-a3b8-9394f4f1b97d] Running
	I1009 23:47:49.453570  115309 system_pods.go:89] "etcd-old-k8s-version-757458" [bf1df11e-b5da-4b61-9c9d-44abcbee1ca6] Running
	I1009 23:47:49.453577  115309 system_pods.go:89] "kube-apiserver-old-k8s-version-757458" [c37984e8-b73c-49e9-9364-d2bf776be636] Running
	I1009 23:47:49.453586  115309 system_pods.go:89] "kube-controller-manager-old-k8s-version-757458" [c161e732-9ee1-4103-b0a2-17ea6772e567] Running
	I1009 23:47:49.453593  115309 system_pods.go:89] "kube-proxy-8ngv2" [186fef3d-bb2d-4ce3-bce1-a59e12fc7df3] Running
	I1009 23:47:49.453600  115309 system_pods.go:89] "kube-scheduler-old-k8s-version-757458" [b696de97-2361-4e1f-a9ff-ac99779cfbda] Running
	I1009 23:47:49.453611  115309 system_pods.go:89] "metrics-server-74d5856cc6-zls5b" [f7adcc12-6ddd-42f7-8b3c-ecafb27627e5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 23:47:49.453619  115309 system_pods.go:89] "storage-provisioner" [9eff148f-8409-45b8-912a-fc1a9a1f00d7] Running
	I1009 23:47:49.453630  115309 system_pods.go:126] duration metric: took 1m10.310811322s to wait for k8s-apps to be running ...
	I1009 23:47:49.453645  115309 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 23:47:49.453699  115309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:47:49.473174  115309 system_svc.go:56] duration metric: took 19.517835ms WaitForService to wait for kubelet.
	I1009 23:47:49.473220  115309 kubeadm.go:581] duration metric: took 1m18.510493201s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1009 23:47:49.473247  115309 node_conditions.go:102] verifying NodePressure condition ...
	I1009 23:47:49.477554  115309 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1009 23:47:49.477577  115309 node_conditions.go:123] node cpu capacity is 2
	I1009 23:47:49.477591  115309 node_conditions.go:105] duration metric: took 4.337713ms to run NodePressure ...
	I1009 23:47:49.477605  115309 start.go:228] waiting for startup goroutines ...
	I1009 23:47:49.477614  115309 start.go:233] waiting for cluster config update ...
	I1009 23:47:49.477627  115309 start.go:242] writing updated cluster config ...
	I1009 23:47:49.477990  115309 ssh_runner.go:195] Run: rm -f paused
	I1009 23:47:49.532504  115309 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I1009 23:47:49.534579  115309 out.go:177] 
	W1009 23:47:49.536142  115309 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I1009 23:47:49.537829  115309 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1009 23:47:49.539936  115309 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-757458" cluster and "default" namespace by default
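
The system_pods.go lines above record each kube-system pod's status until everything but the intentionally-Pending metrics-server is Running. A sketch of that readiness listing with client-go (an assumed dependency for the example; the kubeconfig path in main is hypothetical):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allSystemPodsRunning lists kube-system pods and reports whether each is in
// phase Running, roughly what the system_pods.go:89 lines reflect.
func allSystemPodsRunning(kubeconfig string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("%q is %s\n", p.Name, p.Status.Phase)
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := allSystemPodsRunning("/home/jenkins/.kube/config") // hypothetical path
	fmt.Println(ok, err)
}
```
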
	I1009 23:47:48.991160  120457 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1009 23:47:49.277707  120457 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1009 23:47:49.359322  120457 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1009 23:47:49.534251  120457 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1009 23:47:49.534993  120457 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [flannel-516009 localhost] and IPs [192.168.50.84 127.0.0.1 ::1]
	I1009 23:47:49.599126  120457 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1009 23:47:49.599468  120457 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [flannel-516009 localhost] and IPs [192.168.50.84 127.0.0.1 ::1]
	I1009 23:47:49.849719  120457 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 23:47:50.067774  120457 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 23:47:50.284445  120457 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1009 23:47:50.284835  120457 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 23:47:50.544120  120457 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 23:47:50.613647  120457 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 23:47:50.689755  120457 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 23:47:50.803680  120457 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 23:47:50.804553  120457 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 23:47:50.807103  120457 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 23:47:46.740373  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | domain enable-default-cni-516009 has defined MAC address 52:54:00:c0:47:11 in network mk-enable-default-cni-516009
	I1009 23:47:46.740908  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | unable to find current IP address of domain enable-default-cni-516009 in network mk-enable-default-cni-516009
	I1009 23:47:46.740944  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:46.740852  121174 retry.go:31] will retry after 2.454726769s: waiting for machine to come up
	I1009 23:47:49.197245  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | domain enable-default-cni-516009 has defined MAC address 52:54:00:c0:47:11 in network mk-enable-default-cni-516009
	I1009 23:47:49.197798  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | unable to find current IP address of domain enable-default-cni-516009 in network mk-enable-default-cni-516009
	I1009 23:47:49.197824  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:49.197753  121174 retry.go:31] will retry after 2.769743631s: waiting for machine to come up
	I1009 23:47:48.681145  116683 pod_ready.go:81] duration metric: took 4m0.000804359s waiting for pod "metrics-server-57f55c9bc5-f9vxx" in "kube-system" namespace to be "Ready" ...
	E1009 23:47:48.681189  116683 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1009 23:47:48.681208  116683 pod_ready.go:38] duration metric: took 4m12.427162875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 23:47:48.681246  116683 api_server.go:52] waiting for apiserver process to appear ...
	I1009 23:47:48.681366  116683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1009 23:47:48.714624  116683 logs.go:284] 2 containers: [fd062d5a9e79 0ce97673677f]
	I1009 23:47:48.714709  116683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1009 23:47:48.744533  116683 logs.go:284] 2 containers: [a00f949c8f46 2f4cddf6b709]
	I1009 23:47:48.744620  116683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1009 23:47:48.766301  116683 logs.go:284] 2 containers: [14c40346a01e cbacac7ff631]
	I1009 23:47:48.766400  116683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1009 23:47:48.796756  116683 logs.go:284] 2 containers: [10d9f5ee4bbe b4f7bd036139]
	I1009 23:47:48.796832  116683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1009 23:47:48.840078  116683 logs.go:284] 2 containers: [42937536fbff b0a25f130b5b]
	I1009 23:47:48.840186  116683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1009 23:47:48.864764  116683 logs.go:284] 2 containers: [203267275b20 0b1f7cf62bea]
	I1009 23:47:48.864862  116683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1009 23:47:48.885855  116683 logs.go:284] 0 containers: []
	W1009 23:47:48.885886  116683 logs.go:286] No container was found matching "kindnet"
	I1009 23:47:48.885944  116683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1009 23:47:48.916904  116683 logs.go:284] 1 containers: [85daa3633c93]
	I1009 23:47:48.916982  116683 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1009 23:47:48.944174  116683 logs.go:284] 2 containers: [cb3c70676a0f d5cef7d57ba3]
	I1009 23:47:48.944211  116683 logs.go:123] Gathering logs for kube-apiserver [fd062d5a9e79] ...
	I1009 23:47:48.944226  116683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd062d5a9e79"
	I1009 23:47:48.987276  116683 logs.go:123] Gathering logs for etcd [2f4cddf6b709] ...
	I1009 23:47:48.987312  116683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4cddf6b709"
	I1009 23:47:49.024233  116683 logs.go:123] Gathering logs for coredns [14c40346a01e] ...
	I1009 23:47:49.024261  116683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14c40346a01e"
	I1009 23:47:49.049289  116683 logs.go:123] Gathering logs for storage-provisioner [d5cef7d57ba3] ...
	I1009 23:47:49.049323  116683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5cef7d57ba3"
	I1009 23:47:49.080991  116683 logs.go:123] Gathering logs for kubelet ...
	I1009 23:47:49.081017  116683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 23:47:49.141800  116683 logs.go:138] Found kubelet problem: Oct 09 23:43:41 default-k8s-diff-port-468042 kubelet[1292]: W1009 23:43:41.361233    1292 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-468042" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-468042' and this object
	W1009 23:47:49.142014  116683 logs.go:138] Found kubelet problem: Oct 09 23:43:41 default-k8s-diff-port-468042 kubelet[1292]: E1009 23:43:41.361365    1292 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-468042" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-468042' and this object
	I1009 23:47:49.164832  116683 logs.go:123] Gathering logs for kube-controller-manager [0b1f7cf62bea] ...
	I1009 23:47:49.164863  116683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b1f7cf62bea"
	I1009 23:47:49.210380  116683 logs.go:123] Gathering logs for dmesg ...
	I1009 23:47:49.210410  116683 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 23:47:49.225899  116683 logs.go:123] Gathering logs for kube-apiserver [0ce97673677f] ...
	I1009 23:47:49.225929  116683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ce97673677f"
	I1009 23:47:49.271244  116683 logs.go:123] Gathering logs for coredns [cbacac7ff631] ...
	I1009 23:47:49.271275  116683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbacac7ff631"
	I1009 23:47:49.305431  116683 logs.go:123] Gathering logs for kube-proxy [b0a25f130b5b] ...
	I1009 23:47:49.305464  116683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0a25f130b5b"
	I1009 23:47:49.347412  116683 logs.go:123] Gathering logs for kubernetes-dashboard [85daa3633c93] ...
	I1009 23:47:49.347458  116683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85daa3633c93"
	I1009 23:47:49.373962  116683 logs.go:123] Gathering logs for Docker ...
	I1009 23:47:49.373993  116683 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1009 23:47:49.445492  116683 logs.go:123] Gathering logs for describe nodes ...
	I1009 23:47:49.445524  116683 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 23:47:49.713960  116683 logs.go:123] Gathering logs for kube-scheduler [10d9f5ee4bbe] ...
	I1009 23:47:49.714043  116683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10d9f5ee4bbe"
	I1009 23:47:49.746744  116683 logs.go:123] Gathering logs for kube-scheduler [b4f7bd036139] ...
	I1009 23:47:49.746782  116683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f7bd036139"
	I1009 23:47:49.784639  116683 logs.go:123] Gathering logs for kube-proxy [42937536fbff] ...
	I1009 23:47:49.784673  116683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42937536fbff"
	I1009 23:47:49.815811  116683 logs.go:123] Gathering logs for kube-controller-manager [203267275b20] ...
	I1009 23:47:49.815856  116683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 203267275b20"
	I1009 23:47:49.871956  116683 logs.go:123] Gathering logs for storage-provisioner [cb3c70676a0f] ...
	I1009 23:47:49.871989  116683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb3c70676a0f"
	I1009 23:47:49.903550  116683 logs.go:123] Gathering logs for container status ...
	I1009 23:47:49.903584  116683 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 23:47:50.001959  116683 logs.go:123] Gathering logs for etcd [a00f949c8f46] ...
	I1009 23:47:50.001993  116683 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a00f949c8f46"
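
The log-gathering phase above first resolves container IDs via a name filter, then tails each container's output. Both commands can be reproduced directly; a compact sketch of the list-then-tail loop:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors `docker logs --tail 400 <id>`.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, comp := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(comp)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Println(logs)
		}
	}
}
```
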
	I1009 23:47:50.044286  116683 out.go:309] Setting ErrFile to fd 2...
	I1009 23:47:50.044322  116683 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1009 23:47:50.044389  116683 out.go:239] X Problems detected in kubelet:
	W1009 23:47:50.044406  116683 out.go:239]   Oct 09 23:43:41 default-k8s-diff-port-468042 kubelet[1292]: W1009 23:43:41.361233    1292 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-468042" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-468042' and this object
	W1009 23:47:50.044417  116683 out.go:239]   Oct 09 23:43:41 default-k8s-diff-port-468042 kubelet[1292]: E1009 23:43:41.361365    1292 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-468042" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-468042' and this object
	I1009 23:47:50.044427  116683 out.go:309] Setting ErrFile to fd 2...
	I1009 23:47:50.044435  116683 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:47:50.809132  120457 out.go:204]   - Booting up control plane ...
	I1009 23:47:50.809285  120457 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 23:47:50.809389  120457 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 23:47:50.809937  120457 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 23:47:50.825884  120457 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 23:47:50.827814  120457 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 23:47:50.827870  120457 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1009 23:47:50.949956  120457 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
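
During wait-control-plane, kubeadm polls the apiserver until it responds healthy (here that took about 7.5 seconds, per the "[apiclient]" line below). A sketch of the same readiness signal, polling /healthz over HTTPS; TLS verification is skipped only to keep the example short, and the address in main is the node IP and port from this run:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls the apiserver's /healthz endpoint until it returns 200.
func waitHealthy(addr string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://" + addr + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("control plane not healthy after %v", timeout)
}

func main() {
	if err := waitHealthy("192.168.50.84:8443", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
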
	I1009 23:47:51.968956  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | domain enable-default-cni-516009 has defined MAC address 52:54:00:c0:47:11 in network mk-enable-default-cni-516009
	I1009 23:47:51.969514  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | unable to find current IP address of domain enable-default-cni-516009 in network mk-enable-default-cni-516009
	I1009 23:47:51.969560  121077 main.go:141] libmachine: (enable-default-cni-516009) DBG | I1009 23:47:51.969462  121174 retry.go:31] will retry after 4.439744417s: waiting for machine to come up
	I1009 23:47:58.452748  120457 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.505644 seconds
	I1009 23:47:58.452879  120457 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 23:47:58.470208  120457 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 23:47:58.995269  120457 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 23:47:58.995508  120457 kubeadm.go:322] [mark-control-plane] Marking the node flannel-516009 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 23:47:59.513385  120457 kubeadm.go:322] [bootstrap-token] Using token: gknj2r.wmvix1jgr3esh90e
	I1009 23:47:59.514912  120457 out.go:204]   - Configuring RBAC rules ...
	I1009 23:47:59.515023  120457 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 23:47:59.519703  120457 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 23:47:59.527530  120457 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 23:47:59.531329  120457 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 23:47:59.537306  120457 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 23:47:59.551694  120457 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 23:47:59.582994  120457 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 23:47:59.869850  120457 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1009 23:47:59.925016  120457 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1009 23:47:59.928008  120457 kubeadm.go:322] 
	I1009 23:47:59.928119  120457 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1009 23:47:59.928141  120457 kubeadm.go:322] 
	I1009 23:47:59.928267  120457 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1009 23:47:59.928301  120457 kubeadm.go:322] 
	I1009 23:47:59.928351  120457 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1009 23:47:59.928471  120457 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 23:47:59.928563  120457 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 23:47:59.928576  120457 kubeadm.go:322] 
	I1009 23:47:59.928652  120457 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1009 23:47:59.928668  120457 kubeadm.go:322] 
	I1009 23:47:59.928740  120457 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 23:47:59.928753  120457 kubeadm.go:322] 
	I1009 23:47:59.928818  120457 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1009 23:47:59.928917  120457 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 23:47:59.929020  120457 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 23:47:59.929030  120457 kubeadm.go:322] 
	I1009 23:47:59.929125  120457 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 23:47:59.929226  120457 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1009 23:47:59.929238  120457 kubeadm.go:322] 
	I1009 23:47:59.929371  120457 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gknj2r.wmvix1jgr3esh90e \
	I1009 23:47:59.929508  120457 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:475eb7798652406d7ab8427129d232158bf635a691d874a17d1f0140d854e1f5 \
	I1009 23:47:59.929537  120457 kubeadm.go:322] 	--control-plane 
	I1009 23:47:59.929547  120457 kubeadm.go:322] 
	I1009 23:47:59.929648  120457 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1009 23:47:59.929659  120457 kubeadm.go:322] 
	I1009 23:47:59.929743  120457 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gknj2r.wmvix1jgr3esh90e \
	I1009 23:47:59.929921  120457 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:475eb7798652406d7ab8427129d232158bf635a691d874a17d1f0140d854e1f5 
	I1009 23:47:59.932786  120457 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 23:47:59.932817  120457 cni.go:84] Creating CNI manager for "flannel"
	I1009 23:47:59.935564  120457 out.go:177] * Configuring Flannel (Container Networking Interface) ...
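	
	Note: the flannel CNI step minikube reports here amounts to applying a flannel DaemonSet manifest against the freshly initialized control plane. A rough manual equivalent, sketched under the assumption that the upstream release manifest matches what minikube embeds (image tags and pod CIDR may differ):
	
	  # hedged sketch: hand-applied flannel, approximating minikube's CNI step
	  kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f \
	    https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml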
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-10-09 23:40:21 UTC, ends at Mon 2023-10-09 23:48:01 UTC. --
	Oct 09 23:46:51 old-k8s-version-757458 dockerd[1084]: time="2023-10-09T23:46:51.074874238Z" level=info msg="ignoring event" container=42c7f79a8ac3d5824b0567ef0621ae6c4b9f798c918ae8ee89094fe7ed74a4d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 09 23:46:51 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:46:51.075273944Z" level=warning msg="cleaning up after shim disconnected" id=42c7f79a8ac3d5824b0567ef0621ae6c4b9f798c918ae8ee89094fe7ed74a4d0 namespace=moby
	Oct 09 23:46:51 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:46:51.075292261Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 09 23:47:02 old-k8s-version-757458 dockerd[1084]: time="2023-10-09T23:47:02.821708165Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 09 23:47:02 old-k8s-version-757458 dockerd[1084]: time="2023-10-09T23:47:02.821772516Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 09 23:47:02 old-k8s-version-757458 dockerd[1084]: time="2023-10-09T23:47:02.827773160Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 09 23:47:10 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:47:10.893800794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 09 23:47:10 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:47:10.894138193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 23:47:10 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:47:10.894171992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 09 23:47:10 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:47:10.894249506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 23:47:11 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:47:11.398146913Z" level=info msg="shim disconnected" id=f75d69dfb8e9971410c4b71bce1fe9b11dbd90690a770dfbd9d915a0557a32cd namespace=moby
	Oct 09 23:47:11 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:47:11.398687821Z" level=warning msg="cleaning up after shim disconnected" id=f75d69dfb8e9971410c4b71bce1fe9b11dbd90690a770dfbd9d915a0557a32cd namespace=moby
	Oct 09 23:47:11 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:47:11.398714594Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 09 23:47:11 old-k8s-version-757458 dockerd[1084]: time="2023-10-09T23:47:11.399713813Z" level=info msg="ignoring event" container=f75d69dfb8e9971410c4b71bce1fe9b11dbd90690a770dfbd9d915a0557a32cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 09 23:47:24 old-k8s-version-757458 dockerd[1084]: time="2023-10-09T23:47:24.841152532Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 09 23:47:24 old-k8s-version-757458 dockerd[1084]: time="2023-10-09T23:47:24.841214746Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 09 23:47:24 old-k8s-version-757458 dockerd[1084]: time="2023-10-09T23:47:24.843742653Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 09 23:47:32 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:47:32.871964874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 09 23:47:32 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:47:32.874247046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 23:47:32 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:47:32.874586898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 09 23:47:32 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:47:32.874739460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 09 23:47:33 old-k8s-version-757458 dockerd[1084]: time="2023-10-09T23:47:33.326267743Z" level=info msg="ignoring event" container=70f35bc4cadd99a97c2eb78bb0237c8ec866995603d189e2dc643da49784bf87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 09 23:47:33 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:47:33.327589467Z" level=info msg="shim disconnected" id=70f35bc4cadd99a97c2eb78bb0237c8ec866995603d189e2dc643da49784bf87 namespace=moby
	Oct 09 23:47:33 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:47:33.327755190Z" level=warning msg="cleaning up after shim disconnected" id=70f35bc4cadd99a97c2eb78bb0237c8ec866995603d189e2dc643da49784bf87 namespace=moby
	Oct 09 23:47:33 old-k8s-version-757458 dockerd[1090]: time="2023-10-09T23:47:33.327887975Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                      PORTS     NAMES
	70f35bc4cadd   a90209bb39e3             "nginx -g 'daemon of…"   29 seconds ago       Exited (1) 27 seconds ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard_a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb_3
	1918169defb7   kubernetesui/dashboard   "/dashboard --insecu…"   About a minute ago   Up About a minute                     k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-wshxl_kubernetes-dashboard_958d4255-4cd7-4a6d-9b0c-375d499c3a87_0
	1aed553bf431   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_metrics-server-74d5856cc6-zls5b_kube-system_f7adcc12-6ddd-42f7-8b3c-ecafb27627e5_0
	884ea42b68ae   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kubernetes-dashboard-84b68f675b-wshxl_kubernetes-dashboard_958d4255-4cd7-4a6d-9b0c-375d499c3a87_0
	682bb24fb471   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard_a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb_0
	19104569d718   6e38f40d628d             "/storage-provisioner"   About a minute ago   Up About a minute                     k8s_storage-provisioner_storage-provisioner_kube-system_9eff148f-8409-45b8-912a-fc1a9a1f00d7_0
	1573fc98b791   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_storage-provisioner_kube-system_9eff148f-8409-45b8-912a-fc1a9a1f00d7_0
	7f469dc3e6b9   bf261d157914             "/coredns -conf /etc…"   About a minute ago   Up About a minute                     k8s_coredns_coredns-5644d7b6d9-w2qqz_kube-system_22ba58b1-12d6-49e9-a3b8-9394f4f1b97d_0
	b04aae192b59   c21b0c7400f9             "/usr/local/bin/kube…"   About a minute ago   Up About a minute                     k8s_kube-proxy_kube-proxy-8ngv2_kube-system_186fef3d-bb2d-4ce3-bce1-a59e12fc7df3_0
	e3f65560ecaa   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_coredns-5644d7b6d9-w2qqz_kube-system_22ba58b1-12d6-49e9-a3b8-9394f4f1b97d_0
	f472d4fc9208   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-proxy-8ngv2_kube-system_186fef3d-bb2d-4ce3-bce1-a59e12fc7df3_0
	72d27f8c5574   b2756210eeab             "etcd --advertise-cl…"   About a minute ago   Up About a minute                     k8s_etcd_etcd-old-k8s-version-757458_kube-system_511b0de9f38be57286dbee2ac88463ba_0
	4a39a0a94c76   301ddc62b80b             "kube-scheduler --au…"   About a minute ago   Up About a minute                     k8s_kube-scheduler_kube-scheduler-old-k8s-version-757458_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	3ba0ced29442   06a629a7e51c             "kube-controller-man…"   About a minute ago   Up About a minute                     k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-757458_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	e77116a9983f   b305571ca60a             "kube-apiserver --ad…"   About a minute ago   Up About a minute                     k8s_kube-apiserver_kube-apiserver-old-k8s-version-757458_kube-system_ff7fcd59d479a48f11f669cd621ecc99_0
	a4e2ac5e6a56   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-apiserver-old-k8s-version-757458_kube-system_ff7fcd59d479a48f11f669cd621ecc99_0
	63e1ca25a9fe   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_etcd-old-k8s-version-757458_kube-system_511b0de9f38be57286dbee2ac88463ba_0
	5e901e895284   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-scheduler-old-k8s-version-757458_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	c221408d09d7   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-controller-manager-old-k8s-version-757458_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	time="2023-10-09T23:48:01Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
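	
	Note: the trailing fatal line is crictl failing CRI v1 validation against dockershim; on this v1.16 cluster dockershim predates the runtime.v1 API, so the table above comes from the "|| sudo docker ps -a" fallback in the gathering command shown earlier. A hedged sketch of the two halves of that fallback, run separately:
	
	  # crictl cannot validate runtime.v1 against dockershim on k8s v1.16
	  sudo crictl --runtime-endpoint unix:///var/run/dockershim.sock ps -a   # fails as above
	  sudo docker ps -a                                                      # produces the listing shown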
	
	* 
	* ==> coredns [7f469dc3e6b9] <==
	* .:53
	2023-10-09T23:46:33.027Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-10-09T23:46:33.028Z [INFO] CoreDNS-1.6.2
	2023-10-09T23:46:33.028Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-10-09T23:47:04.447Z [INFO] plugin/reload: Running configuration MD5 = 7bc8613a521eb1a1737fc3e7c0fea3ca
	[INFO] Reloading complete
	2023-10-09T23:47:04.483Z [INFO] 127.0.0.1:49951 - 46970 "HINFO IN 6560776849549130083.7125448762128776045. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036344332s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-757458
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-757458
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90
	                    minikube.k8s.io/name=old-k8s-version-757458
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_09T23_46_16_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Oct 2023 23:46:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Oct 2023 23:47:11 +0000   Mon, 09 Oct 2023 23:46:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Oct 2023 23:47:11 +0000   Mon, 09 Oct 2023 23:46:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Oct 2023 23:47:11 +0000   Mon, 09 Oct 2023 23:46:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Oct 2023 23:47:11 +0000   Mon, 09 Oct 2023 23:46:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.116
	  Hostname:    old-k8s-version-757458
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 0d2d4024b5284994bdb4cc88a71ee816
	 System UUID:                0d2d4024-b528-4994-bdb4-cc88a71ee816
	 Boot ID:                    0af09c0e-518e-478f-9a82-a604b5d7b125
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-w2qqz                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     90s
	  kube-system                etcd-old-k8s-version-757458                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                kube-apiserver-old-k8s-version-757458             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                kube-controller-manager-old-k8s-version-757458    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                kube-proxy-8ngv2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                kube-scheduler-old-k8s-version-757458             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                metrics-server-74d5856cc6-zls5b                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         87s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-9wdvj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-wshxl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  NodeHasSufficientMemory  117s (x8 over 118s)  kubelet, old-k8s-version-757458     Node old-k8s-version-757458 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 118s)  kubelet, old-k8s-version-757458     Node old-k8s-version-757458 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x7 over 118s)  kubelet, old-k8s-version-757458     Node old-k8s-version-757458 status is now: NodeHasSufficientPID
	  Normal  Starting                 89s                  kube-proxy, old-k8s-version-757458  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct 9 23:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.083629] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.643685] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.493412] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.160357] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.464274] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.683452] systemd-fstab-generator[512]: Ignoring "noauto" for root device
	[  +0.134610] systemd-fstab-generator[523]: Ignoring "noauto" for root device
	[  +1.420474] systemd-fstab-generator[792]: Ignoring "noauto" for root device
	[  +0.396146] systemd-fstab-generator[830]: Ignoring "noauto" for root device
	[  +0.126039] systemd-fstab-generator[841]: Ignoring "noauto" for root device
	[  +0.211489] systemd-fstab-generator[854]: Ignoring "noauto" for root device
	[  +6.888883] systemd-fstab-generator[1075]: Ignoring "noauto" for root device
	[  +1.414870] kauditd_printk_skb: 67 callbacks suppressed
	[ +13.015438] systemd-fstab-generator[1493]: Ignoring "noauto" for root device
	[  +0.469881] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.232114] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 9 23:41] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 9 23:46] systemd-fstab-generator[5476]: Ignoring "noauto" for root device
	[ +34.315396] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [72d27f8c5574] <==
	* 2023-10-09 23:46:06.634077 W | auth: simple token is not cryptographically signed
	2023-10-09 23:46:06.638730 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-09 23:46:06.642133 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-09 23:46:06.642604 I | embed: listening for metrics on http://192.168.61.116:2381
	2023-10-09 23:46:06.643288 I | etcdserver: 3ff2c8dabfa88909 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-09 23:46:06.644623 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-09 23:46:06.645223 I | etcdserver/membership: added member 3ff2c8dabfa88909 [https://192.168.61.116:2380] to cluster d8013dd48c9fa2cd
	2023-10-09 23:46:07.107510 I | raft: 3ff2c8dabfa88909 is starting a new election at term 1
	2023-10-09 23:46:07.107579 I | raft: 3ff2c8dabfa88909 became candidate at term 2
	2023-10-09 23:46:07.107604 I | raft: 3ff2c8dabfa88909 received MsgVoteResp from 3ff2c8dabfa88909 at term 2
	2023-10-09 23:46:07.107617 I | raft: 3ff2c8dabfa88909 became leader at term 2
	2023-10-09 23:46:07.107624 I | raft: raft.node: 3ff2c8dabfa88909 elected leader 3ff2c8dabfa88909 at term 2
	2023-10-09 23:46:07.108627 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-09 23:46:07.109736 I | etcdserver: published {Name:old-k8s-version-757458 ClientURLs:[https://192.168.61.116:2379]} to cluster d8013dd48c9fa2cd
	2023-10-09 23:46:07.110458 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-09 23:46:07.110973 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-09 23:46:07.111448 I | embed: ready to serve client requests
	2023-10-09 23:46:07.118004 I | embed: serving client requests on 192.168.61.116:2379
	2023-10-09 23:46:07.118496 I | embed: ready to serve client requests
	2023-10-09 23:46:07.120573 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-09 23:46:27.906299 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (139.375843ms) to execute
	2023-10-09 23:46:27.906758 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/namespace-controller\" " with result "range_response_count:1 size:207" took too long (227.199336ms) to execute
	2023-10-09 23:46:34.370944 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-w2qqz\" " with result "range_response_count:1 size:1890" took too long (403.901941ms) to execute
	2023-10-09 23:46:34.383045 W | etcdserver: request "header:<ID:9874576588172464842 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" mod_revision:360 > success:<request_put:<key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" value_size:1335 >> failure:<request_range:<key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" > >>" with result "size:16" took too long (153.799784ms) to execute
	2023-10-09 23:46:34.384195 W | etcdserver: read-only range request "key:\"/registry/namespaces/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (288.10109ms) to execute
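	
	Note: the "took too long" warnings above point at slow request handling on the CI VM rather than an etcd fault. To confirm etcd health from the node, a sketch using the certificate paths etcd itself logged above (assuming the server certificate is also accepted for client auth, which client-cert-auth setups may disallow):
	
	  ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint status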
	
	* 
	* ==> kernel <==
	*  23:48:01 up 7 min,  0 users,  load average: 0.85, 0.68, 0.34
	Linux old-k8s-version-757458 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [e77116a9983f] <==
	* I1009 23:46:12.090305       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I1009 23:46:12.097203       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1009 23:46:12.097496       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1009 23:46:13.865679       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 23:46:14.145710       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1009 23:46:14.494092       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.61.116]
	I1009 23:46:14.495017       1 controller.go:606] quota admission added evaluator for: endpoints
	I1009 23:46:14.520622       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 23:46:15.404525       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1009 23:46:15.845701       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1009 23:46:16.144426       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1009 23:46:31.057871       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1009 23:46:31.098181       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1009 23:46:31.166648       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	E1009 23:46:33.607919       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	I1009 23:46:35.508624       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1009 23:46:35.508703       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 23:46:35.508764       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1009 23:46:35.508770       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 23:47:35.509318       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1009 23:47:35.509498       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 23:47:35.510396       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1009 23:47:35.510598       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [3ba0ced29442] <==
	* I1009 23:46:34.776004       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"2721fe89-7531-4176-80b4-ec6bd2ee731f", APIVersion:"apps/v1", ResourceVersion:"410", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1009 23:46:34.799747       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1009 23:46:34.800514       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"2721fe89-7531-4176-80b4-ec6bd2ee731f", APIVersion:"apps/v1", ResourceVersion:"410", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1009 23:46:34.804308       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1009 23:46:34.804478       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"80889955-c21d-46f0-a071-18ef5166bca9", APIVersion:"apps/v1", ResourceVersion:"415", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1009 23:46:34.827064       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1009 23:46:34.827494       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1009 23:46:34.827495       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"80889955-c21d-46f0-a071-18ef5166bca9", APIVersion:"apps/v1", ResourceVersion:"415", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1009 23:46:34.828218       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"2721fe89-7531-4176-80b4-ec6bd2ee731f", APIVersion:"apps/v1", ResourceVersion:"410", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1009 23:46:34.862843       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1009 23:46:34.863722       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"80889955-c21d-46f0-a071-18ef5166bca9", APIVersion:"apps/v1", ResourceVersion:"415", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1009 23:46:34.887553       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1009 23:46:34.887829       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"2721fe89-7531-4176-80b4-ec6bd2ee731f", APIVersion:"apps/v1", ResourceVersion:"410", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1009 23:46:34.902929       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1009 23:46:34.903204       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"80889955-c21d-46f0-a071-18ef5166bca9", APIVersion:"apps/v1", ResourceVersion:"415", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1009 23:46:34.914678       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1009 23:46:34.914747       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"80889955-c21d-46f0-a071-18ef5166bca9", APIVersion:"apps/v1", ResourceVersion:"415", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1009 23:46:34.922231       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1009 23:46:34.922473       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"2721fe89-7531-4176-80b4-ec6bd2ee731f", APIVersion:"apps/v1", ResourceVersion:"410", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1009 23:46:35.005805       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"2721fe89-7531-4176-80b4-ec6bd2ee731f", APIVersion:"apps/v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-9wdvj
	I1009 23:46:35.010008       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"80889955-c21d-46f0-a071-18ef5166bca9", APIVersion:"apps/v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-wshxl
	E1009 23:47:01.426272       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1009 23:47:03.524744       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1009 23:47:31.678691       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1009 23:47:35.526798       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [b04aae192b59] <==
	* W1009 23:46:32.659614       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1009 23:46:32.670434       1 node.go:135] Successfully retrieved node IP: 192.168.61.116
	I1009 23:46:32.670477       1 server_others.go:149] Using iptables Proxier.
	I1009 23:46:32.671086       1 server.go:529] Version: v1.16.0
	I1009 23:46:32.677454       1 config.go:313] Starting service config controller
	I1009 23:46:32.677494       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1009 23:46:32.677849       1 config.go:131] Starting endpoints config controller
	I1009 23:46:32.677923       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1009 23:46:32.778158       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1009 23:46:32.792300       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [4a39a0a94c76] <==
	* W1009 23:46:11.210480       1 authentication.go:79] Authentication is disabled
	I1009 23:46:11.210682       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1009 23:46:11.211544       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1009 23:46:11.275986       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 23:46:11.276584       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 23:46:11.277054       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 23:46:11.277088       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 23:46:11.277527       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 23:46:11.278278       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 23:46:11.279655       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 23:46:11.279849       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 23:46:11.279947       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 23:46:11.280100       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 23:46:11.280423       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1009 23:46:12.278035       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 23:46:12.279064       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 23:46:12.282095       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 23:46:12.283252       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 23:46:12.284585       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 23:46:12.286261       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 23:46:12.290673       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 23:46:12.292651       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 23:46:12.293470       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 23:46:12.299150       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 23:46:12.303139       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-09 23:40:21 UTC, ends at Mon 2023-10-09 23:48:01 UTC. --
	Oct 09 23:46:51 old-k8s-version-757458 kubelet[5494]: E1009 23:46:51.536477    5494 pod_workers.go:191] Error syncing pod a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb ("dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"
	Oct 09 23:46:52 old-k8s-version-757458 kubelet[5494]: W1009 23:46:52.544574    5494 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-9wdvj through plugin: invalid network status for
	Oct 09 23:46:52 old-k8s-version-757458 kubelet[5494]: E1009 23:46:52.549564    5494 pod_workers.go:191] Error syncing pod a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb ("dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"
	Oct 09 23:46:59 old-k8s-version-757458 kubelet[5494]: E1009 23:46:59.677789    5494 pod_workers.go:191] Error syncing pod a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb ("dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"
	Oct 09 23:47:02 old-k8s-version-757458 kubelet[5494]: E1009 23:47:02.828551    5494 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 09 23:47:02 old-k8s-version-757458 kubelet[5494]: E1009 23:47:02.828619    5494 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 09 23:47:02 old-k8s-version-757458 kubelet[5494]: E1009 23:47:02.828671    5494 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 09 23:47:02 old-k8s-version-757458 kubelet[5494]: E1009 23:47:02.828712    5494 pod_workers.go:191] Error syncing pod f7adcc12-6ddd-42f7-8b3c-ecafb27627e5 ("metrics-server-74d5856cc6-zls5b_kube-system(f7adcc12-6ddd-42f7-8b3c-ecafb27627e5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 09 23:47:11 old-k8s-version-757458 kubelet[5494]: W1009 23:47:11.718127    5494 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-9wdvj through plugin: invalid network status for
	Oct 09 23:47:11 old-k8s-version-757458 kubelet[5494]: E1009 23:47:11.728018    5494 pod_workers.go:191] Error syncing pod a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb ("dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"
	Oct 09 23:47:12 old-k8s-version-757458 kubelet[5494]: W1009 23:47:12.738606    5494 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-9wdvj through plugin: invalid network status for
	Oct 09 23:47:13 old-k8s-version-757458 kubelet[5494]: E1009 23:47:13.799860    5494 pod_workers.go:191] Error syncing pod f7adcc12-6ddd-42f7-8b3c-ecafb27627e5 ("metrics-server-74d5856cc6-zls5b_kube-system(f7adcc12-6ddd-42f7-8b3c-ecafb27627e5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 09 23:47:19 old-k8s-version-757458 kubelet[5494]: E1009 23:47:19.678158    5494 pod_workers.go:191] Error syncing pod a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb ("dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"
	Oct 09 23:47:24 old-k8s-version-757458 kubelet[5494]: E1009 23:47:24.844224    5494 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 09 23:47:24 old-k8s-version-757458 kubelet[5494]: E1009 23:47:24.844298    5494 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 09 23:47:24 old-k8s-version-757458 kubelet[5494]: E1009 23:47:24.844412    5494 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 09 23:47:24 old-k8s-version-757458 kubelet[5494]: E1009 23:47:24.844443    5494 pod_workers.go:191] Error syncing pod f7adcc12-6ddd-42f7-8b3c-ecafb27627e5 ("metrics-server-74d5856cc6-zls5b_kube-system(f7adcc12-6ddd-42f7-8b3c-ecafb27627e5)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 09 23:47:32 old-k8s-version-757458 kubelet[5494]: W1009 23:47:32.913164    5494 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-9wdvj through plugin: invalid network status for
	Oct 09 23:47:34 old-k8s-version-757458 kubelet[5494]: W1009 23:47:34.325580    5494 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-9wdvj through plugin: invalid network status for
	Oct 09 23:47:34 old-k8s-version-757458 kubelet[5494]: E1009 23:47:34.333624    5494 pod_workers.go:191] Error syncing pod a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb ("dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"
	Oct 09 23:47:35 old-k8s-version-757458 kubelet[5494]: W1009 23:47:35.350059    5494 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-9wdvj through plugin: invalid network status for
	Oct 09 23:47:37 old-k8s-version-757458 kubelet[5494]: E1009 23:47:37.796898    5494 pod_workers.go:191] Error syncing pod f7adcc12-6ddd-42f7-8b3c-ecafb27627e5 ("metrics-server-74d5856cc6-zls5b_kube-system(f7adcc12-6ddd-42f7-8b3c-ecafb27627e5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 09 23:47:39 old-k8s-version-757458 kubelet[5494]: E1009 23:47:39.677838    5494 pod_workers.go:191] Error syncing pod a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb ("dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"
	Oct 09 23:47:52 old-k8s-version-757458 kubelet[5494]: E1009 23:47:52.794934    5494 pod_workers.go:191] Error syncing pod a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb ("dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-9wdvj_kubernetes-dashboard(a6c215e4-58a0-42c0-9bc6-d1d0118ab2bb)"
	Oct 09 23:47:52 old-k8s-version-757458 kubelet[5494]: E1009 23:47:52.796894    5494 pod_workers.go:191] Error syncing pod f7adcc12-6ddd-42f7-8b3c-ecafb27627e5 ("metrics-server-74d5856cc6-zls5b_kube-system(f7adcc12-6ddd-42f7-8b3c-ecafb27627e5)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> kubernetes-dashboard [1918169defb7] <==
	* 2023/10/09 23:46:44 Using namespace: kubernetes-dashboard
	2023/10/09 23:46:44 Using in-cluster config to connect to apiserver
	2023/10/09 23:46:44 Using secret token for csrf signing
	2023/10/09 23:46:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/10/09 23:46:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/10/09 23:46:44 Successful initial request to the apiserver, version: v1.16.0
	2023/10/09 23:46:44 Generating JWE encryption key
	2023/10/09 23:46:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/10/09 23:46:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/10/09 23:46:44 Initializing JWE encryption key from synchronized object
	2023/10/09 23:46:44 Creating in-cluster Sidecar client
	2023/10/09 23:46:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/09 23:46:44 Serving insecurely on HTTP port: 9090
	2023/10/09 23:47:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/09 23:47:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/09 23:46:44 Starting overwatch
	
	* 
	* ==> storage-provisioner [19104569d718] <==
	* I1009 23:46:34.934999       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 23:46:34.965632       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 23:46:34.965810       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 23:46:35.011966       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 23:46:35.012874       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-757458_92d506bd-928c-4dd9-86e2-45f983fdc054!
	I1009 23:46:35.020224       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"09fd7b77-8be8-444b-8f29-b7909aaaf5a0", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-757458_92d506bd-928c-4dd9-86e2-45f983fdc054 became leader
	I1009 23:46:35.113833       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-757458_92d506bd-928c-4dd9-86e2-45f983fdc054!
	

                                                
                                                
-- /stdout --
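
The storage-provisioner log above is client-go's stock leader-election flow (leaderelection.go:243/253): attempt to acquire the kube-system/k8s.io-minikube-hostpath lease, emit a LeaderElection event on success, then start the provisioner controller. A minimal sketch of that pattern follows; it is illustrative rather than the provisioner's actual code, and it uses a Lease lock where the event above shows the older Endpoints-based lock.

// leaderelect.go - illustrative sketch of client-go leader election;
// not the storage provisioner's code.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Same lease name as the log above; a Lease object is the current
	// equivalent of the Endpoints lock the old provisioner uses.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; starting provisioner controller")
				<-ctx.Done() // real work runs here until the lease is lost
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; shutting down")
			},
		},
	})
}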
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-757458 -n old-k8s-version-757458
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-757458 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-zls5b
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-757458 describe pod metrics-server-74d5856cc6-zls5b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-757458 describe pod metrics-server-74d5856cc6-zls5b: exit status 1 (74.687162ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-zls5b" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-757458 describe pod metrics-server-74d5856cc6-zls5b: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.31s)
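
The pod state here is intentional: metrics-server's image is pinned to the unresolvable fake.domain registry, so it can never pull and never leaves Pending. Note also that the post-mortem's `kubectl describe pod` runs without a namespace, so it looks in default while the pod lives in kube-system, producing the NotFound above. A rough Go sketch of that post-mortem shape, assuming kubectl is on PATH; the helper name is ours, not minikube's:

// postmortem.go - a sketch of the shape of the helpers_test.go post-mortem,
// with a hypothetical helper name; not minikube's actual code.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// describeNonRunningPods lists every pod outside the Running phase across
// all namespaces, then describes each one. The describe deliberately omits
// -n (matching the transcript above), so pods outside the default namespace
// come back NotFound.
func describeNonRunningPods(kubeContext string) error {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		return fmt.Errorf("listing non-running pods: %w", err)
	}
	for _, pod := range strings.Fields(string(out)) {
		desc, derr := exec.Command("kubectl", "--context", kubeContext,
			"describe", "pod", pod).CombinedOutput()
		fmt.Printf("==> describe %s <==\n%s\n", pod, desc)
		if derr != nil {
			fmt.Printf("describe %s: %v (pod may already be gone)\n", pod, derr)
		}
	}
	return nil
}

func main() {
	if err := describeNonRunningPods("old-k8s-version-757458"); err != nil {
		log.Fatal(err)
	}
}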

                                                
                                    

Test pass (287/321)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.58
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.2/json-events 5.06
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.14
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
19 TestBinaryMirror 0.58
20 TestOffline 100.73
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 157.15
27 TestAddons/parallel/Registry 15.84
28 TestAddons/parallel/Ingress 27.27
29 TestAddons/parallel/InspektorGadget 10.84
30 TestAddons/parallel/MetricsServer 5.77
31 TestAddons/parallel/HelmTiller 11.9
33 TestAddons/parallel/CSI 97.77
34 TestAddons/parallel/Headlamp 14.92
35 TestAddons/parallel/CloudSpanner 5.53
36 TestAddons/parallel/LocalPath 11.96
37 TestAddons/parallel/NvidiaDevicePlugin 5.64
40 TestAddons/serial/GCPAuth/Namespaces 0.13
41 TestAddons/StoppedEnableDisable 13.42
42 TestCertOptions 118.12
43 TestCertExpiration 309.03
44 TestDockerFlags 87.36
45 TestForceSystemdFlag 63.96
46 TestForceSystemdEnv 89.54
48 TestKVMDriverInstallOrUpdate 3.13
52 TestErrorSpam/setup 50.37
53 TestErrorSpam/start 0.38
54 TestErrorSpam/status 0.8
55 TestErrorSpam/pause 1.25
56 TestErrorSpam/unpause 1.35
57 TestErrorSpam/stop 4.26
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 65.37
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 37.41
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.08
68 TestFunctional/serial/CacheCmd/cache/add_remote 2.39
69 TestFunctional/serial/CacheCmd/cache/add_local 1.33
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.25
74 TestFunctional/serial/CacheCmd/cache/delete 0.12
75 TestFunctional/serial/MinikubeKubectlCmd 0.12
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
77 TestFunctional/serial/ExtraConfig 38.33
78 TestFunctional/serial/ComponentHealth 0.08
79 TestFunctional/serial/LogsCmd 1.13
80 TestFunctional/serial/LogsFileCmd 1.08
81 TestFunctional/serial/InvalidService 4.38
83 TestFunctional/parallel/ConfigCmd 0.47
84 TestFunctional/parallel/DashboardCmd 20.96
85 TestFunctional/parallel/DryRun 0.33
86 TestFunctional/parallel/InternationalLanguage 0.16
87 TestFunctional/parallel/StatusCmd 1.13
91 TestFunctional/parallel/ServiceCmdConnect 9.51
92 TestFunctional/parallel/AddonsCmd 0.16
93 TestFunctional/parallel/PersistentVolumeClaim 54.05
95 TestFunctional/parallel/SSHCmd 0.5
96 TestFunctional/parallel/CpCmd 1.06
97 TestFunctional/parallel/MySQL 43.01
98 TestFunctional/parallel/FileSync 0.23
99 TestFunctional/parallel/CertSync 1.58
103 TestFunctional/parallel/NodeLabels 0.07
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.23
107 TestFunctional/parallel/License 0.18
108 TestFunctional/parallel/ServiceCmd/DeployApp 13.27
109 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
110 TestFunctional/parallel/ProfileCmd/profile_list 0.35
111 TestFunctional/parallel/MountCmd/any-port 10.84
112 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
113 TestFunctional/parallel/MountCmd/specific-port 1.63
114 TestFunctional/parallel/MountCmd/VerifyCleanup 1.44
115 TestFunctional/parallel/ServiceCmd/List 0.45
116 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
117 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
118 TestFunctional/parallel/ServiceCmd/Format 0.39
123 TestFunctional/parallel/ServiceCmd/URL 0.32
129 TestFunctional/parallel/Version/short 0.06
130 TestFunctional/parallel/Version/components 0.94
131 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
132 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
133 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
134 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
135 TestFunctional/parallel/ImageCommands/ImageBuild 3.7
136 TestFunctional/parallel/ImageCommands/Setup 1.52
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.38
138 TestFunctional/parallel/DockerEnv/bash 0.87
139 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.7
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.34
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.15
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.91
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.48
148 TestFunctional/delete_addon-resizer_images 0.07
149 TestFunctional/delete_my-image_image 0.02
150 TestFunctional/delete_minikube_cached_images 0.01
151 TestGvisorAddon 298.85
154 TestImageBuild/serial/Setup 50.01
155 TestImageBuild/serial/NormalBuild 1.62
156 TestImageBuild/serial/BuildWithBuildArg 1.23
157 TestImageBuild/serial/BuildWithDockerIgnore 0.38
158 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.3
161 TestIngressAddonLegacy/StartLegacyK8sCluster 73.67
163 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.47
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.54
165 TestIngressAddonLegacy/serial/ValidateIngressAddons 35.02
168 TestJSONOutput/start/Command 102.78
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.59
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.54
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 8.11
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.22
196 TestMainNoArgs 0.06
197 TestMinikubeProfile 107.13
200 TestMountStart/serial/StartWithMountFirst 31.32
201 TestMountStart/serial/VerifyMountFirst 0.41
202 TestMountStart/serial/StartWithMountSecond 28.24
203 TestMountStart/serial/VerifyMountSecond 0.4
204 TestMountStart/serial/DeleteFirst 0.91
205 TestMountStart/serial/VerifyMountPostDelete 0.41
206 TestMountStart/serial/Stop 11.27
207 TestMountStart/serial/RestartStopped 23.48
208 TestMountStart/serial/VerifyMountPostStop 0.41
211 TestMultiNode/serial/FreshStart2Nodes 130.12
212 TestMultiNode/serial/DeployApp2Nodes 5.07
213 TestMultiNode/serial/PingHostFrom2Pods 0.95
214 TestMultiNode/serial/AddNode 46.2
215 TestMultiNode/serial/ProfileList 0.21
216 TestMultiNode/serial/CopyFile 7.66
217 TestMultiNode/serial/StopNode 3.99
218 TestMultiNode/serial/StartAfterStop 32.2
219 TestMultiNode/serial/RestartKeepsNodes 188.71
220 TestMultiNode/serial/DeleteNode 1.77
221 TestMultiNode/serial/StopMultiNode 25.6
223 TestMultiNode/serial/ValidateNameConflict 54.31
228 TestPreload 170.59
230 TestScheduledStopUnix 122.22
231 TestSkaffold 140.68
236 TestKubernetesUpgrade 228.12
246 TestPause/serial/Start 134.72
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
249 TestNoKubernetes/serial/StartWithK8s 127.33
261 TestPause/serial/SecondStartNoReconfiguration 66.7
262 TestNoKubernetes/serial/StartWithStopK8s 32.62
263 TestNoKubernetes/serial/Start 29.14
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
265 TestPause/serial/Pause 0.73
266 TestNoKubernetes/serial/ProfileList 18.56
267 TestPause/serial/VerifyStatus 0.3
268 TestPause/serial/Unpause 0.77
269 TestPause/serial/PauseAgain 0.84
270 TestPause/serial/DeletePaused 1.11
271 TestPause/serial/VerifyDeletedResources 14.36
272 TestStoppedBinaryUpgrade/Setup 0.44
273 TestNoKubernetes/serial/Stop 2.45
274 TestStoppedBinaryUpgrade/Upgrade 232.16
275 TestNoKubernetes/serial/StartNoArgs 25.69
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
278 TestStartStop/group/old-k8s-version/serial/FirstStart 193.08
279 TestStoppedBinaryUpgrade/MinikubeLogs 1.4
281 TestStartStop/group/no-preload/serial/FirstStart 117
283 TestStartStop/group/embed-certs/serial/FirstStart 93.2
284 TestStartStop/group/no-preload/serial/DeployApp 9.49
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.28
286 TestStartStop/group/no-preload/serial/Stop 13.13
287 TestStartStop/group/embed-certs/serial/DeployApp 9.41
288 TestStartStop/group/old-k8s-version/serial/DeployApp 9.47
289 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
290 TestStartStop/group/no-preload/serial/SecondStart 315.48
291 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
292 TestStartStop/group/embed-certs/serial/Stop 13.13
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.88
294 TestStartStop/group/old-k8s-version/serial/Stop 13.14
295 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
296 TestStartStop/group/embed-certs/serial/SecondStart 323.29
297 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
298 TestStartStop/group/old-k8s-version/serial/SecondStart 485.52
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 123.16
301 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.5
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.24
303 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.14
304 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
305 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 341.21
306 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
307 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
308 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
309 TestStartStop/group/no-preload/serial/Pause 2.79
311 TestStartStop/group/newest-cni/serial/FirstStart 76.22
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
313 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
314 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
315 TestStartStop/group/embed-certs/serial/Pause 2.59
316 TestNetworkPlugins/group/auto/Start 78.61
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.18
319 TestStartStop/group/newest-cni/serial/Stop 13.14
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
321 TestStartStop/group/newest-cni/serial/SecondStart 52.7
322 TestNetworkPlugins/group/auto/KubeletFlags 0.31
323 TestNetworkPlugins/group/auto/NetCatPod 12.5
324 TestNetworkPlugins/group/auto/DNS 0.19
325 TestNetworkPlugins/group/auto/Localhost 0.18
326 TestNetworkPlugins/group/auto/HairPin 0.19
327 TestNetworkPlugins/group/flannel/Start 91.03
328 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
331 TestStartStop/group/newest-cni/serial/Pause 2.65
332 TestNetworkPlugins/group/enable-default-cni/Start 93.11
333 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
334 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
336 TestStartStop/group/old-k8s-version/serial/Pause 2.74
337 TestNetworkPlugins/group/bridge/Start 80.58
338 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
339 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
340 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
341 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.79
342 TestNetworkPlugins/group/flannel/ControllerPod 5.03
343 TestNetworkPlugins/group/kubenet/Start 83.51
344 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
345 TestNetworkPlugins/group/flannel/NetCatPod 15.44
346 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
347 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.42
348 TestNetworkPlugins/group/flannel/DNS 0.23
349 TestNetworkPlugins/group/flannel/Localhost 0.17
350 TestNetworkPlugins/group/flannel/HairPin 0.17
351 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
352 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
353 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
354 TestNetworkPlugins/group/kindnet/Start 94.92
355 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
356 TestNetworkPlugins/group/calico/Start 137.57
357 TestNetworkPlugins/group/bridge/NetCatPod 13.43
358 TestNetworkPlugins/group/bridge/DNS 0.18
359 TestNetworkPlugins/group/bridge/Localhost 0.17
360 TestNetworkPlugins/group/bridge/HairPin 0.15
361 TestNetworkPlugins/group/custom-flannel/Start 110.67
362 TestNetworkPlugins/group/kubenet/KubeletFlags 0.25
363 TestNetworkPlugins/group/kubenet/NetCatPod 14.36
364 TestNetworkPlugins/group/kubenet/DNS 0.18
365 TestNetworkPlugins/group/kubenet/Localhost 0.15
366 TestNetworkPlugins/group/kubenet/HairPin 0.16
367 TestNetworkPlugins/group/false/Start 93.6
368 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
369 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
370 TestNetworkPlugins/group/kindnet/NetCatPod 13.37
371 TestNetworkPlugins/group/kindnet/DNS 0.22
372 TestNetworkPlugins/group/kindnet/Localhost 0.25
373 TestNetworkPlugins/group/kindnet/HairPin 0.25
374 TestNetworkPlugins/group/calico/ControllerPod 5.03
375 TestNetworkPlugins/group/calico/KubeletFlags 0.23
376 TestNetworkPlugins/group/calico/NetCatPod 12.36
377 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
378 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.35
379 TestNetworkPlugins/group/calico/DNS 0.22
380 TestNetworkPlugins/group/calico/Localhost 0.21
381 TestNetworkPlugins/group/calico/HairPin 0.18
382 TestNetworkPlugins/group/custom-flannel/DNS 0.21
383 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
384 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
385 TestNetworkPlugins/group/false/KubeletFlags 0.24
386 TestNetworkPlugins/group/false/NetCatPod 12.37
387 TestNetworkPlugins/group/false/DNS 0.19
388 TestNetworkPlugins/group/false/Localhost 0.16
389 TestNetworkPlugins/group/false/HairPin 0.17
x
+
TestDownloadOnly/v1.16.0/json-events (7.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-811598 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-811598 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (7.577753031s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.58s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-811598
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-811598: exit status 85 (74.181563ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-811598 | jenkins | v1.31.2 | 09 Oct 23 22:54 UTC |          |
	|         | -p download-only-811598        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/09 22:54:27
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 22:54:27.263123   85613 out.go:296] Setting OutFile to fd 1 ...
	I1009 22:54:27.263413   85613 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 22:54:27.263424   85613 out.go:309] Setting ErrFile to fd 2...
	I1009 22:54:27.263429   85613 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 22:54:27.263601   85613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-78415/.minikube/bin
	W1009 22:54:27.263718   85613 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17375-78415/.minikube/config/config.json: open /home/jenkins/minikube-integration/17375-78415/.minikube/config/config.json: no such file or directory
	I1009 22:54:27.264269   85613 out.go:303] Setting JSON to true
	I1009 22:54:27.265133   85613 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9414,"bootTime":1696882653,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 22:54:27.265190   85613 start.go:138] virtualization: kvm guest
	I1009 22:54:27.267477   85613 out.go:97] [download-only-811598] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1009 22:54:27.268874   85613 out.go:169] MINIKUBE_LOCATION=17375
	W1009 22:54:27.267585   85613 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17375-78415/.minikube/cache/preloaded-tarball: no such file or directory
	I1009 22:54:27.267654   85613 notify.go:220] Checking for updates...
	I1009 22:54:27.271473   85613 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 22:54:27.272722   85613 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17375-78415/kubeconfig
	I1009 22:54:27.273889   85613 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	I1009 22:54:27.275052   85613 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1009 22:54:27.277189   85613 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 22:54:27.277476   85613 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 22:54:27.311463   85613 out.go:97] Using the kvm2 driver based on user configuration
	I1009 22:54:27.311487   85613 start.go:298] selected driver: kvm2
	I1009 22:54:27.311494   85613 start.go:902] validating driver "kvm2" against <nil>
	I1009 22:54:27.311938   85613 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 22:54:27.312034   85613 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17375-78415/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 22:54:27.326412   85613 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1009 22:54:27.326498   85613 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1009 22:54:27.327201   85613 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1009 22:54:27.327414   85613 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 22:54:27.327487   85613 cni.go:84] Creating CNI manager for ""
	I1009 22:54:27.327507   85613 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1009 22:54:27.327525   85613 start_flags.go:323] config:
	{Name:download-only-811598 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-811598 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 22:54:27.327855   85613 iso.go:125] acquiring lock: {Name:mk8f0545fb1f7801479f5eb65fbe7d8f303a99cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 22:54:27.329620   85613 out.go:97] Downloading VM boot image ...
	I1009 22:54:27.329661   85613 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17375-78415/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1009 22:54:29.742760   85613 out.go:97] Starting control plane node download-only-811598 in cluster download-only-811598
	I1009 22:54:29.742785   85613 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1009 22:54:29.774380   85613 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1009 22:54:29.774437   85613 cache.go:57] Caching tarball of preloaded images
	I1009 22:54:29.774628   85613 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1009 22:54:29.776541   85613 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1009 22:54:29.776565   85613 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1009 22:54:29.804743   85613 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17375-78415/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-811598"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
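
Both the ISO and the preload tarball in the log above are fetched with a `?checksum=md5:...` query, i.e. the digest is verified while the download streams rather than in a second pass. A generic Go sketch of that pattern (not minikube's download.go), reusing the URL and digest from the log above:

// fetch_md5.go - generic checksum-while-streaming sketch; not minikube's
// download.go.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// fetchWithMD5 streams url into dest, hashing bytes as they are written,
// and fails if the hex digest does not match want.
func fetchWithMD5(url, dest, want string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GET %s: %s", url, resp.Status)
	}

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// TeeReader feeds every byte copied to the file through the hash.
	h := md5.New()
	if _, err := io.Copy(f, io.TeeReader(resp.Body, h)); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", dest, got, want)
	}
	return nil
}

func main() {
	// URL and digest taken from the preload download line above.
	err := fetchWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
		"preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
		"326f3ce331abb64565b50b8c9e791244",
	)
	if err != nil {
		log.Fatal(err)
	}
}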

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/json-events (5.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-811598 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-811598 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=kvm2 : (5.064606919s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (5.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-811598
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-811598: exit status 85 (70.362884ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-811598 | jenkins | v1.31.2 | 09 Oct 23 22:54 UTC |          |
	|         | -p download-only-811598        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-811598 | jenkins | v1.31.2 | 09 Oct 23 22:54 UTC |          |
	|         | -p download-only-811598        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/09 22:54:34
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 22:54:34.915550   85660 out.go:296] Setting OutFile to fd 1 ...
	I1009 22:54:34.915685   85660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 22:54:34.915698   85660 out.go:309] Setting ErrFile to fd 2...
	I1009 22:54:34.915706   85660 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 22:54:34.915892   85660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-78415/.minikube/bin
	W1009 22:54:34.916014   85660 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17375-78415/.minikube/config/config.json: open /home/jenkins/minikube-integration/17375-78415/.minikube/config/config.json: no such file or directory
	I1009 22:54:34.916429   85660 out.go:303] Setting JSON to true
	I1009 22:54:34.917338   85660 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9422,"bootTime":1696882653,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 22:54:34.917394   85660 start.go:138] virtualization: kvm guest
	I1009 22:54:34.919304   85660 out.go:97] [download-only-811598] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1009 22:54:34.920725   85660 out.go:169] MINIKUBE_LOCATION=17375
	I1009 22:54:34.919538   85660 notify.go:220] Checking for updates...
	I1009 22:54:34.923581   85660 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 22:54:34.925144   85660 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17375-78415/kubeconfig
	I1009 22:54:34.926692   85660 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	I1009 22:54:34.928005   85660 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-811598"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-811598
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-292354 --alsologtostderr --binary-mirror http://127.0.0.1:41815 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-292354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-292354
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
x
+
TestOffline (100.73s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-396869 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-396869 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m39.712838745s)
helpers_test.go:175: Cleaning up "offline-docker-396869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-396869
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-396869: (1.021601043s)
--- PASS: TestOffline (100.73s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-229072
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-229072: exit status 85 (66.083649ms)

                                                
                                                
-- stdout --
	* Profile "addons-229072" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-229072"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-229072
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-229072: exit status 85 (65.557949ms)

                                                
                                                
-- stdout --
	* Profile "addons-229072" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-229072"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (157.15s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-229072 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-229072 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m37.151553006s)
--- PASS: TestAddons/Setup (157.15s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 23.53618ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-55vxq" [ed58a2c1-b26a-40b2-b00a-e93e0ebdf76c] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.020924334s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-f9pl2" [a3796c27-6f14-4b1d-a8bc-6e6bd0fb4508] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012746061s
addons_test.go:339: (dbg) Run:  kubectl --context addons-229072 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-229072 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-229072 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.031537975s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-229072 ip
2023/10/09 22:57:33 [DEBUG] GET http://192.168.39.17:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-229072 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.84s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (27.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-229072 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-229072 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-229072 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [eee88238-2790-4a75-ad71-3edba461fe34] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [eee88238-2790-4a75-ad71-3edba461fe34] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.02112209s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-229072 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-229072 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-229072 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.17
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-229072 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-229072 addons disable ingress-dns --alsologtostderr -v=1: (1.84127986s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-229072 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-229072 addons disable ingress --alsologtostderr -v=1: (7.745501428s)
--- PASS: TestAddons/parallel/Ingress (27.27s)
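
The ingress check above works by hitting the node's localhost while presenting the virtual host the Ingress rule matches (`curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`). The same probe in Go, as a sketch (the test itself shells out to curl over minikube ssh):

// hostheader.go - Go equivalent of the curl-with-Host-header probe above;
// a sketch, not the test's code.
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		log.Fatal(err)
	}
	// In net/http the Host header is carried on req.Host, not req.Header;
	// this is what makes the ingress-nginx rule for nginx.example.com match.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // print the backend nginx response
}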

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-k9ctw" [671bff1f-10d4-40a4-9dc5-953e20b38ad8] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012342007s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-229072
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-229072: (5.82930417s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.77s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 23.610449ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-zp64g" [3bf9a532-0972-4a22-aa3d-5f52b3a07fe9] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.019357812s
addons_test.go:414: (dbg) Run:  kubectl --context addons-229072 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-229072 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.77s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.9s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 4.696264ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-xnfbx" [20dcc7e9-47d6-4d73-8781-da413cefa17f] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.017606625s
addons_test.go:472: (dbg) Run:  kubectl --context addons-229072 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-229072 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.872841273s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-229072 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p addons-229072 addons disable helm-tiller --alsologtostderr -v=1: (1.000263262s)
--- PASS: TestAddons/parallel/HelmTiller (11.90s)

                                                
                                    
x
+
TestAddons/parallel/CSI (97.77s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 24.702887ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-229072 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-229072 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-229072 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-229072 get pvc hpvc -o jsonpath={.status.phase} -n default
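
Each `get pvc hpvc` line above is one poll of the claim's phase inside the test's 6m0s budget; the poll repeats until the claim leaves Pending. Written against client-go instead of kubectl, the same wait is roughly the following sketch, assuming an in-cluster config:

// pvcwait.go - illustrative sketch of the PVC wait loop; not minikube's code.
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 6m, mirroring the test's "waiting 6m0s" budget.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pvc, err := client.CoreV1().PersistentVolumeClaims("default").
			Get(context.TODO(), "hpvc", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
	if err != nil {
		log.Fatalf("pvc hpvc never became Bound: %v", err)
	}
	log.Println("pvc hpvc is Bound")
}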
addons_test.go:573: (dbg) Run:  kubectl --context addons-229072 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b8f9aa41-8dd6-4a2e-bc46-f35df0d3e7a0] Pending
helpers_test.go:344: "task-pv-pod" [b8f9aa41-8dd6-4a2e-bc46-f35df0d3e7a0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b8f9aa41-8dd6-4a2e-bc46-f35df0d3e7a0] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.019006171s
addons_test.go:583: (dbg) Run:  kubectl --context addons-229072 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-229072 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-229072 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-229072 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-229072 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-229072 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-229072 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-229072 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
[helpers_test.go:394: the pvc poll above repeats 6 more times]
addons_test.go:615: (dbg) Run:  kubectl --context addons-229072 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e5340744-24f0-4ce6-a5d6-5b3aaf2aa967] Pending
helpers_test.go:344: "task-pv-pod-restore" [e5340744-24f0-4ce6-a5d6-5b3aaf2aa967] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e5340744-24f0-4ce6-a5d6-5b3aaf2aa967] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.024544892s
addons_test.go:625: (dbg) Run:  kubectl --context addons-229072 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-229072 delete pod task-pv-pod-restore: (1.021529542s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-229072 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-229072 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-229072 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-229072 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.705866575s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-229072 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (97.77s)
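For reference, the flow above can be replayed by hand with the same commands the test shells out to; a minimal sketch, assuming the csi-hostpath-driver and volumesnapshots addons are enabled on the addons-229072 profile, the testdata manifests from the minikube source tree are available, and the hpvc claim was created from the driver's pvc manifest just before this excerpt (the polling loop is a hand-rolled stand-in for the harness's own backoff):

	# wait for the claim to bind, then attach a pod to it
	while [ "$(kubectl --context addons-229072 get pvc hpvc -o jsonpath={.status.phase} -n default)" != "Bound" ]; do sleep 2; done
	kubectl --context addons-229072 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	# snapshot the volume, drop the originals, restore the claim from the snapshot
	kubectl --context addons-229072 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-229072 delete pod task-pv-pod
	kubectl --context addons-229072 delete pvc hpvc
	kubectl --context addons-229072 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-229072 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
	# tear down
	kubectl --context addons-229072 delete pod task-pv-pod-restore
	kubectl --context addons-229072 delete pvc hpvc-restore
	kubectl --context addons-229072 delete volumesnapshot new-snapshot-demo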

TestAddons/parallel/Headlamp (14.92s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-229072 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-229072 --alsologtostderr -v=1: (1.901929107s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-5nldp" [b40bca75-936d-487d-adec-5d26299b6760] Pending
helpers_test.go:344: "headlamp-94b766c-5nldp" [b40bca75-936d-487d-adec-5d26299b6760] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-5nldp" [b40bca75-936d-487d-adec-5d26299b6760] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.021962396s
--- PASS: TestAddons/parallel/Headlamp (14.92s)
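By hand, the same check is two commands: enable the addon, then wait for its pod by the label the test selects on; kubectl wait here is a stand-in for the harness's poller:

	out/minikube-linux-amd64 addons enable headlamp -p addons-229072 --alsologtostderr -v=1
	kubectl --context addons-229072 -n headlamp wait pod -l app.kubernetes.io/name=headlamp --for=condition=Ready --timeout=8m0s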

TestAddons/parallel/CloudSpanner (5.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-976pk" [8c013b1c-e976-4886-a840-606e1987a4ad] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010577948s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-229072
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

TestAddons/parallel/LocalPath (11.96s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-229072 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-229072 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-229072 get pvc test-pvc -o jsonpath={.status.phase} -n default
[helpers_test.go:394: the pvc poll above repeats 7 more times]
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f6e86820-a2eb-47c1-9c40-3c5914fdbfa4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f6e86820-a2eb-47c1-9c40-3c5914fdbfa4] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f6e86820-a2eb-47c1-9c40-3c5914fdbfa4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.010341762s
addons_test.go:890: (dbg) Run:  kubectl --context addons-229072 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-229072 ssh "cat /opt/local-path-provisioner/pvc-e3458172-deac-4e30-b9c1-cea8189cac42_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-229072 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-229072 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-229072 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.96s)
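The ssh step at addons_test.go:899 confirms the Rancher local-path provisioner materialized the claim as a directory on the node; the directory name embeds the PV name and so differs on every run. A sketch, assuming the provisioner's usual <volume>_<namespace>_<claim> layout:

	# look up the bound PV name (pvc-<uid>), then read the file through the node
	PV=$(kubectl --context addons-229072 get pvc test-pvc -o jsonpath={.spec.volumeName})
	out/minikube-linux-amd64 -p addons-229072 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"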

TestAddons/parallel/NvidiaDevicePlugin (5.64s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-f7lhb" [d63ce559-eb35-4fdd-bf78-c462ea77eace] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.021073113s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-229072
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.64s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-229072 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-229072 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (13.42s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-229072
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-229072: (13.112009245s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-229072
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-229072
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-229072
--- PASS: TestAddons/StoppedEnableDisable (13.42s)
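What this test pins down is that addon toggling works against a stopped profile, so the enable/disable calls above run with the VM down:

	out/minikube-linux-amd64 stop -p addons-229072
	out/minikube-linux-amd64 addons enable dashboard -p addons-229072     # accepted while stopped
	out/minikube-linux-amd64 addons disable dashboard -p addons-229072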

TestCertOptions (118.12s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-659895 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-659895 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m56.321011762s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-659895 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-659895 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-659895 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-659895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-659895
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-659895: (1.135609682s)
--- PASS: TestCertOptions (118.12s)
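The openssl step dumps the generated apiserver certificate so the extra --apiserver-ips and --apiserver-names can be checked as SANs; a narrower probe (the grep is an addition for illustration, not part of the test):

	out/minikube-linux-amd64 -p cert-options-659895 ssh \
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	# expect 192.168.15.15 and www.google.com among the listed SANs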

TestCertExpiration (309.03s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-814725 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-814725 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m23.567233936s)
E1009 23:36:56.581924   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/skaffold-961234/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-814725 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-814725 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (44.376450413s)
helpers_test.go:175: Cleaning up "cert-expiration-814725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-814725
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-814725: (1.083184193s)
--- PASS: TestCertExpiration (309.03s)
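The mechanism under test is re-issuing certificates on a subsequent start: the first invocation mints 3-minute certificates, the second replaces them with year-long ones once they have had time to expire:

	out/minikube-linux-amd64 start -p cert-expiration-814725 --memory=2048 --cert-expiration=3m --driver=kvm2
	# ...let the short-lived certs run out, then renew in place:
	out/minikube-linux-amd64 start -p cert-expiration-814725 --memory=2048 --cert-expiration=8760h --driver=kvm2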

TestDockerFlags (87.36s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-148802 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-148802 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m25.624205106s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-148802 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-148802 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-148802" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-148802
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-148802: (1.174581865s)
--- PASS: TestDockerFlags (87.36s)
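The two systemctl probes are how --docker-env and --docker-opt are verified: the env pairs should surface in the unit's Environment property and the opts on its ExecStart line:

	out/minikube-linux-amd64 -p docker-flags-148802 ssh "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR and BAZ=BAT
	out/minikube-linux-amd64 -p docker-flags-148802 ssh "sudo systemctl show docker --property=ExecStart --no-pager"     # expect the debug and icc=true opts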

TestForceSystemdFlag (63.96s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-262186 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
E1009 23:34:53.699864   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/skaffold-961234/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-262186 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m2.852854134s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-262186 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-262186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-262186
--- PASS: TestForceSystemdFlag (63.96s)
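--force-systemd is confirmed through Docker's own view of its cgroup driver, a one-liner that works against any running profile:

	out/minikube-linux-amd64 -p force-systemd-flag-262186 ssh "docker info --format {{.CgroupDriver}}"   # expected: systemd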

TestForceSystemdEnv (89.54s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-971782 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
E1009 23:34:12.739224   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/skaffold-961234/client.crt: no such file or directory
[the same cert_rotation error repeats 12 more times between 23:34:12 and 23:34:33, at exponentially increasing intervals]
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-971782 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m28.184163125s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-971782 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-971782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-971782
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-971782: (1.088666198s)
--- PASS: TestForceSystemdEnv (89.54s)

TestKVMDriverInstallOrUpdate (3.13s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.13s)

TestErrorSpam/setup (50.37s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-184378 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-184378 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-184378 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-184378 --driver=kvm2 : (50.366805039s)
--- PASS: TestErrorSpam/setup (50.37s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.8s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 status
--- PASS: TestErrorSpam/status (0.80s)

TestErrorSpam/pause (1.25s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 pause
--- PASS: TestErrorSpam/pause (1.25s)

TestErrorSpam/unpause (1.35s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 unpause
--- PASS: TestErrorSpam/unpause (1.35s)

TestErrorSpam/stop (4.26s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 stop: (4.095824995s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-184378 --log_dir /tmp/nospam-184378 stop
--- PASS: TestErrorSpam/stop (4.26s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17375-78415/.minikube/files/etc/test/nested/copy/85601/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (65.37s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-964126 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-964126 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m5.374162916s)
--- PASS: TestFunctional/serial/StartWithProxy (65.37s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.41s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-964126 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-964126 --alsologtostderr -v=8: (37.407853225s)
functional_test.go:659: soft start took 37.408600291s for "functional-964126" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.41s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-964126 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.39s)

TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-964126 /tmp/TestFunctionalserialCacheCmdcacheadd_local3098437496/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 cache add minikube-local-cache-test:functional-964126
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-964126 cache add minikube-local-cache-test:functional-964126: (1.014630787s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 cache delete minikube-local-cache-test:functional-964126
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-964126
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-964126 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (255.608939ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.25s)
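Condensed, the reload check is: remove a cached image behind minikube's back, prove it is gone, then let cache reload push every cached image back into the node:

	out/minikube-linux-amd64 -p functional-964126 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-964126 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: no such image
	out/minikube-linux-amd64 -p functional-964126 cache reload
	out/minikube-linux-amd64 -p functional-964126 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again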

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 kubectl -- --context functional-964126 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-964126 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (38.33s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-964126 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1009 23:02:18.208857   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
[the same cert_rotation error repeats 12 more times between 23:02:18 and 23:02:38, at exponentially increasing intervals]
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-964126 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.331187204s)
functional_test.go:757: restart took 38.33132637s for "functional-964126" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.33s)
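--extra-config takes component.key=value pairs; the restart above uses it to switch an admission plugin on, and ComponentHealth below then checks the control plane came back Ready. The flag as used here:

	out/minikube-linux-amd64 start -p functional-964126 \
		--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all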

TestFunctional/serial/ComponentHealth (0.08s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-964126 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)

TestFunctional/serial/LogsCmd (1.13s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-964126 logs: (1.130395219s)
--- PASS: TestFunctional/serial/LogsCmd (1.13s)

TestFunctional/serial/LogsFileCmd (1.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 logs --file /tmp/TestFunctionalserialLogsFileCmd1812355393/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-964126 logs --file /tmp/TestFunctionalserialLogsFileCmd1812355393/001/logs.txt: (1.080049196s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.08s)

TestFunctional/serial/InvalidService (4.38s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-964126 apply -f testdata/invalidsvc.yaml
E1009 23:02:59.171579   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-964126
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-964126: exit status 115 (286.596984ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.7:32014 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-964126 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.38s)
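Exit status 115 here is the SVC_UNREACHABLE path shown in the stderr block: the Service object exists, but it selects no running pod, so there is nothing for minikube service to open. Reproducible with the test's own manifest:

	kubectl --context functional-964126 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-964126; echo "exit: $?"   # 115
	kubectl --context functional-964126 delete -f testdata/invalidsvc.yaml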

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-964126 config get cpus: exit status 14 (89.903159ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-964126 config get cpus: exit status 14 (69.281832ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
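The two Non-zero exits are the behavior under test: config get on an unset key exits 14 instead of printing an empty value. The full round trip:

	out/minikube-linux-amd64 -p functional-964126 config set cpus 2
	out/minikube-linux-amd64 -p functional-964126 config get cpus     # prints 2
	out/minikube-linux-amd64 -p functional-964126 config unset cpus
	out/minikube-linux-amd64 -p functional-964126 config get cpus     # exit 14: key not found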

TestFunctional/parallel/DashboardCmd (20.96s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-964126 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-964126 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 91622: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.96s)

TestFunctional/parallel/DryRun (0.33s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-964126 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-964126 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (173.042341ms)
-- stdout --
	* [functional-964126] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17375-78415/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1009 23:03:01.888128   91364 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:03:01.888294   91364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:03:01.888306   91364 out.go:309] Setting ErrFile to fd 2...
	I1009 23:03:01.888313   91364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:03:01.888621   91364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-78415/.minikube/bin
	I1009 23:03:01.889369   91364 out.go:303] Setting JSON to false
	I1009 23:03:01.890710   91364 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9929,"bootTime":1696882653,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 23:03:01.890790   91364 start.go:138] virtualization: kvm guest
	I1009 23:03:01.894261   91364 out.go:177] * [functional-964126] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1009 23:03:01.896044   91364 out.go:177]   - MINIKUBE_LOCATION=17375
	I1009 23:03:01.896055   91364 notify.go:220] Checking for updates...
	I1009 23:03:01.897611   91364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 23:03:01.899195   91364 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17375-78415/kubeconfig
	I1009 23:03:01.900607   91364 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	I1009 23:03:01.902067   91364 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 23:03:01.903412   91364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 23:03:01.905307   91364 config.go:182] Loaded profile config "functional-964126": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:03:01.905883   91364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:03:01.905972   91364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:03:01.922339   91364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42341
	I1009 23:03:01.922826   91364 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:03:01.923402   91364 main.go:141] libmachine: Using API Version  1
	I1009 23:03:01.923432   91364 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:03:01.923821   91364 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:03:01.924015   91364 main.go:141] libmachine: (functional-964126) Calling .DriverName
	I1009 23:03:01.924334   91364 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 23:03:01.924835   91364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:03:01.924882   91364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:03:01.943512   91364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36287
	I1009 23:03:01.943957   91364 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:03:01.944506   91364 main.go:141] libmachine: Using API Version  1
	I1009 23:03:01.944547   91364 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:03:01.944882   91364 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:03:01.945091   91364 main.go:141] libmachine: (functional-964126) Calling .DriverName
	I1009 23:03:01.979647   91364 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 23:03:01.981084   91364 start.go:298] selected driver: kvm2
	I1009 23:03:01.981103   91364 start.go:902] validating driver "kvm2" against &{Name:functional-964126 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-964126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.7 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:03:01.981246   91364 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 23:03:01.983742   91364 out.go:177] 
	W1009 23:03:01.985449   91364 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 23:03:01.986828   91364 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-964126 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.33s)
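Note: the non-zero exit captured above is the point of this test: the first dry-run deliberately requests 250MB so that minikube's memory validation fires, and the follow-up dry-run with default flags must succeed. The failing invocation can be replayed verbatim outside the suite; per the InternationalLanguage run below, RSRC_INSUFFICIENT_REQ_MEMORY corresponds to exit status 23:

    out/minikube-linux-amd64 start -p functional-964126 --dry-run --memory 250MB --alsologtostderr --driver=kvm2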

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-964126 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-964126 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (163.196609ms)

-- stdout --
	* [functional-964126] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17375-78415/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1009 23:03:01.724841   91313 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:03:01.724952   91313 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:03:01.724967   91313 out.go:309] Setting ErrFile to fd 2...
	I1009 23:03:01.724974   91313 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:03:01.725397   91313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-78415/.minikube/bin
	I1009 23:03:01.726046   91313 out.go:303] Setting JSON to false
	I1009 23:03:01.727250   91313 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9929,"bootTime":1696882653,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 23:03:01.727334   91313 start.go:138] virtualization: kvm guest
	I1009 23:03:01.729837   91313 out.go:177] * [functional-964126] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1009 23:03:01.731502   91313 out.go:177]   - MINIKUBE_LOCATION=17375
	I1009 23:03:01.731513   91313 notify.go:220] Checking for updates...
	I1009 23:03:01.733037   91313 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 23:03:01.734751   91313 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17375-78415/kubeconfig
	I1009 23:03:01.736322   91313 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	I1009 23:03:01.737645   91313 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 23:03:01.739091   91313 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 23:03:01.740993   91313 config.go:182] Loaded profile config "functional-964126": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:03:01.741659   91313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:03:01.741717   91313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:03:01.757944   91313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
	I1009 23:03:01.758385   91313 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:03:01.758972   91313 main.go:141] libmachine: Using API Version  1
	I1009 23:03:01.759004   91313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:03:01.759414   91313 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:03:01.759621   91313 main.go:141] libmachine: (functional-964126) Calling .DriverName
	I1009 23:03:01.759914   91313 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 23:03:01.760213   91313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:03:01.760259   91313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:03:01.775104   91313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I1009 23:03:01.775472   91313 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:03:01.775931   91313 main.go:141] libmachine: Using API Version  1
	I1009 23:03:01.775949   91313 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:03:01.776201   91313 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:03:01.776429   91313 main.go:141] libmachine: (functional-964126) Calling .DriverName
	I1009 23:03:01.806892   91313 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1009 23:03:01.808513   91313 start.go:298] selected driver: kvm2
	I1009 23:03:01.808525   91313 start.go:902] validating driver "kvm2" against &{Name:functional-964126 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-964126 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.7 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:03:01.808659   91313 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 23:03:01.811030   91313 out.go:177] 
	W1009 23:03:01.812290   91313 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1009 23:03:01.813605   91313 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
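Note: this test repeats the under-provisioned dry-run and asserts that the RSRC_INSUFFICIENT_REQ_MEMORY message comes back localized. The French text above translates as "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is less than the usable minimum of 1800MB". How the suite selects the locale is not visible in this log; a sketch of a manual repro, assuming minikube honours the standard LC_ALL/LANG environment variables:

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-964126 --dry-run --memory 250MB --driver=kvm2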

TestFunctional/parallel/StatusCmd (1.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.13s)
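Note: the three invocations cover the default human-readable status, a custom Go template (the "kublet" key is spelled that way in the test source itself), and JSON output. A sketch of consuming the JSON form, assuming jq is available on the host (.Host is the same field the template above reads):

    out/minikube-linux-amd64 -p functional-964126 status -o json | jq -r .Host    # expect "Running"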

TestFunctional/parallel/ServiceCmdConnect (9.51s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-964126 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-964126 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-9lclx" [39b19938-ff08-4d7b-bd45-5ef7fb8e1d25] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-9lclx" [39b19938-ff08-4d7b-bd45-5ef7fb8e1d25] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.017126233s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.7:30904
functional_test.go:1674: http://192.168.50.7:30904: success! body:

Hostname: hello-node-connect-55497b8b78-9lclx

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.7:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.7:30904
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.51s)
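Note: the test deploys echoserver, exposes it as a NodePort service, resolves the URL via "minikube service --url", and fetches it; the body above is echoserver's standard request dump. The endpoint recorded in the log can be re-checked by hand:

    curl -s http://192.168.50.7:30904/ | grep '^Hostname'    # Hostname: hello-node-connect-55497b8b78-9lclx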

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (54.05s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [bdc36a2f-4e40-45a9-9882-251d4cfefe5d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.015752339s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-964126 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-964126 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-964126 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-964126 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e77ffa80-4a2e-4681-b030-2b4a07464d15] Pending
helpers_test.go:344: "sp-pod" [e77ffa80-4a2e-4681-b030-2b4a07464d15] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e77ffa80-4a2e-4681-b030-2b4a07464d15] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.017351332s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-964126 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-964126 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-964126 delete -f testdata/storage-provisioner/pod.yaml: (2.817211119s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-964126 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6453c169-e50d-4f66-b271-a88a27c6944c] Pending
helpers_test.go:344: "sp-pod" [6453c169-e50d-4f66-b271-a88a27c6944c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6453c169-e50d-4f66-b271-a88a27c6944c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.011914549s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-964126 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (54.05s)
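Note: the pass criterion here is persistence across pod replacement: the first pod writes /tmp/mount/foo, the pod is deleted, and a fresh pod mounting the same claim must still see the file via "ls /tmp/mount". A minimal claim of the kind testdata/storage-provisioner/pvc.yaml exercises would look like the sketch below (the claim name "myclaim" is taken from this run; the size and other fields are assumptions, and the real testdata manifest may differ):

    kubectl --context functional-964126 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    EOF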

TestFunctional/parallel/SSHCmd (0.5s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

TestFunctional/parallel/CpCmd (1.06s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh -n functional-964126 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 cp functional-964126:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2379548386/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh -n functional-964126 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.06s)

TestFunctional/parallel/MySQL (43.01s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-964126 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-z7r45" [4acb3b99-3872-4fdd-bf54-c593699f0b9a] Pending
helpers_test.go:344: "mysql-859648c796-z7r45" [4acb3b99-3872-4fdd-bf54-c593699f0b9a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-z7r45" [4acb3b99-3872-4fdd-bf54-c593699f0b9a] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 37.031447451s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-964126 exec mysql-859648c796-z7r45 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-964126 exec mysql-859648c796-z7r45 -- mysql -ppassword -e "show databases;": exit status 1 (222.344741ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-964126 exec mysql-859648c796-z7r45 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-964126 exec mysql-859648c796-z7r45 -- mysql -ppassword -e "show databases;": exit status 1 (234.541942ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-964126 exec mysql-859648c796-z7r45 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-964126 exec mysql-859648c796-z7r45 -- mysql -ppassword -e "show databases;": exit status 1 (144.862897ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-964126 exec mysql-859648c796-z7r45 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (43.01s)
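Note: the three failed "show databases;" attempts are most likely the usual MySQL warm-up window, not a regression: ERROR 1045 while the image's init scripts have not yet applied the root password, then ERROR 2002 while mysqld cycles its socket, then success. The suite simply retries; done by hand that amounts to something like:

    until kubectl --context functional-964126 exec mysql-859648c796-z7r45 -- \
          mysql -ppassword -e "show databases;"; do sleep 2; done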

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/85601/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "sudo cat /etc/test/nested/copy/85601/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.58s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/85601.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "sudo cat /etc/ssl/certs/85601.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/85601.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "sudo cat /usr/share/ca-certificates/85601.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/856012.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "sudo cat /etc/ssl/certs/856012.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/856012.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "sudo cat /usr/share/ca-certificates/856012.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.58s)
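Note: the test checks each certificate both under its file name and under its OpenSSL subject-hash alias (51391683.0 and 3ec20f2e.0), which is how lookups in /etc/ssl/certs work. The hash half of a pair can be recomputed to confirm the mapping:

    openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/85601.pem    # should print 51391683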

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-964126 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
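Note: the go-template above flattens every label key on the first node. An equivalent jsonpath form (an illustrative alternative, not what the suite runs):

    kubectl --context functional-964126 get nodes -o jsonpath='{.items[0].metadata.labels}'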

TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-964126 ssh "sudo systemctl is-active crio": exit status 1 (230.158271ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)
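Note: "systemctl is-active" exits 3 for an inactive unit, which the ssh wrapper surfaces as "Process exited with status 3"; the non-zero exit plus the "inactive" stdout is exactly what the test expects for crio on this Docker-runtime profile. The complementary check for the active runtime:

    out/minikube-linux-amd64 -p functional-964126 ssh "sudo systemctl is-active docker"    # active, exit 0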

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-964126 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-964126 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-nt72z" [c57e2a80-fbf3-47c6-a66a-6c17397eb6c3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-nt72z" [c57e2a80-fbf3-47c6-a66a-6c17397eb6c3] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.028585915s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.27s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "291.886572ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "61.378732ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/MountCmd/any-port (10.84s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-964126 /tmp/TestFunctionalparallelMountCmdany-port3591115367/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696892581065009616" to /tmp/TestFunctionalparallelMountCmdany-port3591115367/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696892581065009616" to /tmp/TestFunctionalparallelMountCmdany-port3591115367/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696892581065009616" to /tmp/TestFunctionalparallelMountCmdany-port3591115367/001/test-1696892581065009616
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-964126 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.208959ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  9 23:03 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  9 23:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  9 23:03 test-1696892581065009616
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh cat /mount-9p/test-1696892581065009616
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-964126 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [298f4a8c-03b5-44cf-93f7-56b488863a93] Pending
helpers_test.go:344: "busybox-mount" [298f4a8c-03b5-44cf-93f7-56b488863a93] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [298f4a8c-03b5-44cf-93f7-56b488863a93] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [298f4a8c-03b5-44cf-93f7-56b488863a93] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.01651428s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-964126 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-964126 /tmp/TestFunctionalparallelMountCmdany-port3591115367/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.84s)
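Note: the first findmnt probe fails only because the backgrounded "minikube mount" had not finished exporting the 9p share yet; the test polls until the mount appears, then drives a busybox pod through it. The same check by hand (with /tmp/src as a placeholder host directory):

    out/minikube-linux-amd64 mount -p functional-964126 /tmp/src:/mount-9p &
    out/minikube-linux-amd64 -p functional-964126 ssh "findmnt -T /mount-9p"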

TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "240.919967ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "61.718788ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

TestFunctional/parallel/MountCmd/specific-port (1.63s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-964126 /tmp/TestFunctionalparallelMountCmdspecific-port986423592/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-964126 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (241.131157ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-964126 /tmp/TestFunctionalparallelMountCmdspecific-port986423592/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-964126 ssh "sudo umount -f /mount-9p": exit status 1 (207.871511ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-964126 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-964126 /tmp/TestFunctionalparallelMountCmdspecific-port986423592/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.63s)
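Note: --port 46464 pins the host-side 9p server port instead of letting minikube choose a free one; the trailing "umount -f" failing with "not mounted." (exit 32) is expected here, since the share was already torn down when the daemon was stopped. Pinning the port by hand (again with a placeholder host directory):

    out/minikube-linux-amd64 mount -p functional-964126 /tmp/src:/mount-9p --port 46464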

TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-964126 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3651766284/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-964126 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3651766284/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-964126 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3651766284/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-964126 ssh "findmnt -T" /mount1: exit status 1 (321.390322ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-964126 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-964126 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3651766284/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-964126 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3651766284/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-964126 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3651766284/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

TestFunctional/parallel/ServiceCmd/List (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 service list -o json
functional_test.go:1493: Took "467.832971ms" to run "out/minikube-linux-amd64 -p functional-964126 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.7:30355
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

TestFunctional/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/ServiceCmd/URL (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.7:30355
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.94s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.94s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-964126 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-964126
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-964126
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-964126 image ls --format short --alsologtostderr:
I1009 23:03:36.932045   93372 out.go:296] Setting OutFile to fd 1 ...
I1009 23:03:36.932266   93372 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:03:36.932296   93372 out.go:309] Setting ErrFile to fd 2...
I1009 23:03:36.932308   93372 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:03:36.932539   93372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-78415/.minikube/bin
I1009 23:03:36.933448   93372 config.go:182] Loaded profile config "functional-964126": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1009 23:03:36.933699   93372 config.go:182] Loaded profile config "functional-964126": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1009 23:03:36.934165   93372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1009 23:03:36.934264   93372 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 23:03:36.948557   93372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
I1009 23:03:36.949046   93372 main.go:141] libmachine: () Calling .GetVersion
I1009 23:03:36.949728   93372 main.go:141] libmachine: Using API Version  1
I1009 23:03:36.949753   93372 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 23:03:36.950090   93372 main.go:141] libmachine: () Calling .GetMachineName
I1009 23:03:36.950301   93372 main.go:141] libmachine: (functional-964126) Calling .GetState
I1009 23:03:36.953513   93372 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1009 23:03:36.953576   93372 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 23:03:36.967425   93372 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42559
I1009 23:03:36.970970   93372 main.go:141] libmachine: () Calling .GetVersion
I1009 23:03:36.971544   93372 main.go:141] libmachine: Using API Version  1
I1009 23:03:36.971578   93372 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 23:03:36.972075   93372 main.go:141] libmachine: () Calling .GetMachineName
I1009 23:03:36.972255   93372 main.go:141] libmachine: (functional-964126) Calling .DriverName
I1009 23:03:36.972465   93372 ssh_runner.go:195] Run: systemctl --version
I1009 23:03:36.972501   93372 main.go:141] libmachine: (functional-964126) Calling .GetSSHHostname
I1009 23:03:36.974688   93372 main.go:141] libmachine: (functional-964126) DBG | domain functional-964126 has defined MAC address 52:54:00:65:fc:0b in network mk-functional-964126
I1009 23:03:36.975110   93372 main.go:141] libmachine: (functional-964126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:fc:0b", ip: ""} in network mk-functional-964126: {Iface:virbr1 ExpiryTime:2023-10-10 00:00:43 +0000 UTC Type:0 Mac:52:54:00:65:fc:0b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:functional-964126 Clientid:01:52:54:00:65:fc:0b}
I1009 23:03:36.975152   93372 main.go:141] libmachine: (functional-964126) DBG | domain functional-964126 has defined IP address 192.168.50.7 and MAC address 52:54:00:65:fc:0b in network mk-functional-964126
I1009 23:03:36.975278   93372 main.go:141] libmachine: (functional-964126) Calling .GetSSHPort
I1009 23:03:36.975442   93372 main.go:141] libmachine: (functional-964126) Calling .GetSSHKeyPath
I1009 23:03:36.975570   93372 main.go:141] libmachine: (functional-964126) Calling .GetSSHUsername
I1009 23:03:36.975728   93372 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/functional-964126/id_rsa Username:docker}
I1009 23:03:37.097685   93372 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1009 23:03:37.185909   93372 main.go:141] libmachine: Making call to close driver server
I1009 23:03:37.185923   93372 main.go:141] libmachine: (functional-964126) Calling .Close
I1009 23:03:37.186213   93372 main.go:141] libmachine: Successfully made call to close driver server
I1009 23:03:37.186235   93372 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 23:03:37.186248   93372 main.go:141] libmachine: Making call to close driver server
I1009 23:03:37.186252   93372 main.go:141] libmachine: (functional-964126) DBG | Closing plugin on server side
I1009 23:03:37.186258   93372 main.go:141] libmachine: (functional-964126) Calling .Close
I1009 23:03:37.186520   93372 main.go:141] libmachine: (functional-964126) DBG | Closing plugin on server side
I1009 23:03:37.186609   93372 main.go:141] libmachine: Successfully made call to close driver server
I1009 23:03:37.186647   93372 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)
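Note: as the stderr above shows, "image ls" works by shelling into the VM and listing images through the container runtime; on this Docker-runtime profile that is the probe on the ssh_runner line, which can also be run directly:

    out/minikube-linux-amd64 -p functional-964126 ssh -- docker images --no-trunc --format '{{json .}}'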

TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-964126 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/google-containers/addon-resizer      | functional-964126 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-964126 | 953368ea6045c | 30B    |
| registry.k8s.io/kube-proxy                  | v1.28.2           | c120fed2beb84 | 73.1MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest            | 61395b4c586da | 187MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 55f13c92defb1 | 122MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/kube-apiserver              | v1.28.2           | cdcab12b2dd16 | 126MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 7a5d9d67a13f6 | 60.1MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-964126 image ls --format table --alsologtostderr:
I1009 23:03:37.519676   93494 out.go:296] Setting OutFile to fd 1 ...
I1009 23:03:37.519937   93494 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:03:37.519948   93494 out.go:309] Setting ErrFile to fd 2...
I1009 23:03:37.519952   93494 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:03:37.520154   93494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-78415/.minikube/bin
I1009 23:03:37.520725   93494 config.go:182] Loaded profile config "functional-964126": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1009 23:03:37.520833   93494 config.go:182] Loaded profile config "functional-964126": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1009 23:03:37.521251   93494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1009 23:03:37.521309   93494 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 23:03:37.535148   93494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37187
I1009 23:03:37.535620   93494 main.go:141] libmachine: () Calling .GetVersion
I1009 23:03:37.536276   93494 main.go:141] libmachine: Using API Version  1
I1009 23:03:37.536299   93494 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 23:03:37.536628   93494 main.go:141] libmachine: () Calling .GetMachineName
I1009 23:03:37.536837   93494 main.go:141] libmachine: (functional-964126) Calling .GetState
I1009 23:03:37.538786   93494 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1009 23:03:37.538832   93494 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 23:03:37.553261   93494 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44341
I1009 23:03:37.553614   93494 main.go:141] libmachine: () Calling .GetVersion
I1009 23:03:37.554082   93494 main.go:141] libmachine: Using API Version  1
I1009 23:03:37.554121   93494 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 23:03:37.554436   93494 main.go:141] libmachine: () Calling .GetMachineName
I1009 23:03:37.554598   93494 main.go:141] libmachine: (functional-964126) Calling .DriverName
I1009 23:03:37.554801   93494 ssh_runner.go:195] Run: systemctl --version
I1009 23:03:37.554833   93494 main.go:141] libmachine: (functional-964126) Calling .GetSSHHostname
I1009 23:03:37.557706   93494 main.go:141] libmachine: (functional-964126) DBG | domain functional-964126 has defined MAC address 52:54:00:65:fc:0b in network mk-functional-964126
I1009 23:03:37.558143   93494 main.go:141] libmachine: (functional-964126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:fc:0b", ip: ""} in network mk-functional-964126: {Iface:virbr1 ExpiryTime:2023-10-10 00:00:43 +0000 UTC Type:0 Mac:52:54:00:65:fc:0b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:functional-964126 Clientid:01:52:54:00:65:fc:0b}
I1009 23:03:37.558171   93494 main.go:141] libmachine: (functional-964126) DBG | domain functional-964126 has defined IP address 192.168.50.7 and MAC address 52:54:00:65:fc:0b in network mk-functional-964126
I1009 23:03:37.558345   93494 main.go:141] libmachine: (functional-964126) Calling .GetSSHPort
I1009 23:03:37.558525   93494 main.go:141] libmachine: (functional-964126) Calling .GetSSHKeyPath
I1009 23:03:37.558696   93494 main.go:141] libmachine: (functional-964126) Calling .GetSSHUsername
I1009 23:03:37.558826   93494 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/functional-964126/id_rsa Username:docker}
I1009 23:03:37.676742   93494 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1009 23:03:37.810183   93494 main.go:141] libmachine: Making call to close driver server
I1009 23:03:37.810203   93494 main.go:141] libmachine: (functional-964126) Calling .Close
I1009 23:03:37.810514   93494 main.go:141] libmachine: Successfully made call to close driver server
I1009 23:03:37.810541   93494 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 23:03:37.810570   93494 main.go:141] libmachine: Making call to close driver server
I1009 23:03:37.810586   93494 main.go:141] libmachine: (functional-964126) Calling .Close
I1009 23:03:37.810617   93494 main.go:141] libmachine: (functional-964126) DBG | Closing plugin on server side
I1009 23:03:37.810839   93494 main.go:141] libmachine: (functional-964126) DBG | Closing plugin on server side
I1009 23:03:37.810865   93494 main.go:141] libmachine: Successfully made call to close driver server
I1009 23:03:37.810904   93494 main.go:141] libmachine: Making call to close connection to plugin binary
E1009 23:03:40.132355   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)
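Note: the table format above is intended for human inspection; a scripted check against the same output is easier with a field separator. A minimal sketch (assuming the functional-964126 profile is still running and awk is available on the host; the pattern is illustrative):

    out/minikube-linux-amd64 -p functional-964126 image ls --format table | awk -F'|' '/kube-apiserver/ {print $2, $3}'

In the table layout shown above, field 2 is the image name and field 3 the tag.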

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-964126 image ls --format json --alsologtostderr:
[{"id":"61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"60100000"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"73100000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-964126"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"953368ea6045c8e9f6d11c2664d20f1b4f007ece414874eb5f51275027f92d35","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-964126"],"size":"30"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"126000000"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"122000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-964126 image ls --format json --alsologtostderr:
I1009 23:03:37.253356   93431 out.go:296] Setting OutFile to fd 1 ...
I1009 23:03:37.253648   93431 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:03:37.253659   93431 out.go:309] Setting ErrFile to fd 2...
I1009 23:03:37.253664   93431 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:03:37.253830   93431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-78415/.minikube/bin
I1009 23:03:37.254370   93431 config.go:182] Loaded profile config "functional-964126": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1009 23:03:37.254516   93431 config.go:182] Loaded profile config "functional-964126": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1009 23:03:37.255018   93431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1009 23:03:37.255086   93431 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 23:03:37.273982   93431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41943
I1009 23:03:37.274402   93431 main.go:141] libmachine: () Calling .GetVersion
I1009 23:03:37.274975   93431 main.go:141] libmachine: Using API Version  1
I1009 23:03:37.274999   93431 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 23:03:37.275408   93431 main.go:141] libmachine: () Calling .GetMachineName
I1009 23:03:37.275639   93431 main.go:141] libmachine: (functional-964126) Calling .GetState
I1009 23:03:37.277469   93431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1009 23:03:37.277503   93431 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 23:03:37.291248   93431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36797
I1009 23:03:37.291606   93431 main.go:141] libmachine: () Calling .GetVersion
I1009 23:03:37.292078   93431 main.go:141] libmachine: Using API Version  1
I1009 23:03:37.292110   93431 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 23:03:37.292395   93431 main.go:141] libmachine: () Calling .GetMachineName
I1009 23:03:37.292576   93431 main.go:141] libmachine: (functional-964126) Calling .DriverName
I1009 23:03:37.292762   93431 ssh_runner.go:195] Run: systemctl --version
I1009 23:03:37.292790   93431 main.go:141] libmachine: (functional-964126) Calling .GetSSHHostname
I1009 23:03:37.295343   93431 main.go:141] libmachine: (functional-964126) DBG | domain functional-964126 has defined MAC address 52:54:00:65:fc:0b in network mk-functional-964126
I1009 23:03:37.295779   93431 main.go:141] libmachine: (functional-964126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:fc:0b", ip: ""} in network mk-functional-964126: {Iface:virbr1 ExpiryTime:2023-10-10 00:00:43 +0000 UTC Type:0 Mac:52:54:00:65:fc:0b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:functional-964126 Clientid:01:52:54:00:65:fc:0b}
I1009 23:03:37.295804   93431 main.go:141] libmachine: (functional-964126) DBG | domain functional-964126 has defined IP address 192.168.50.7 and MAC address 52:54:00:65:fc:0b in network mk-functional-964126
I1009 23:03:37.295923   93431 main.go:141] libmachine: (functional-964126) Calling .GetSSHPort
I1009 23:03:37.296095   93431 main.go:141] libmachine: (functional-964126) Calling .GetSSHKeyPath
I1009 23:03:37.296239   93431 main.go:141] libmachine: (functional-964126) Calling .GetSSHUsername
I1009 23:03:37.296372   93431 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/functional-964126/id_rsa Username:docker}
I1009 23:03:37.406841   93431 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1009 23:03:37.455301   93431 main.go:141] libmachine: Making call to close driver server
I1009 23:03:37.455319   93431 main.go:141] libmachine: (functional-964126) Calling .Close
I1009 23:03:37.455551   93431 main.go:141] libmachine: Successfully made call to close driver server
I1009 23:03:37.455568   93431 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 23:03:37.455577   93431 main.go:141] libmachine: Making call to close driver server
I1009 23:03:37.455585   93431 main.go:141] libmachine: (functional-964126) Calling .Close
I1009 23:03:37.455787   93431 main.go:141] libmachine: Successfully made call to close driver server
I1009 23:03:37.455802   93431 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 23:03:37.455817   93431 main.go:141] libmachine: (functional-964126) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
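Note: of the three list formats exercised here, JSON is the most script-friendly. A minimal sketch for extracting every tag from the listing above (jq on the host is an assumption, not part of the test):

    out/minikube-linux-amd64 -p functional-964126 image ls --format json | jq -r '.[].repoTags[]'

Each array element carries id, repoDigests, repoTags, and size fields, exactly as in the stdout above.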

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-964126 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 953368ea6045c8e9f6d11c2664d20f1b4f007ece414874eb5f51275027f92d35
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-964126
size: "30"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "60100000"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "122000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "126000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "73100000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-964126
size: "32900000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-964126 image ls --format yaml --alsologtostderr:
I1009 23:03:36.935540   93373 out.go:296] Setting OutFile to fd 1 ...
I1009 23:03:36.935723   93373 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:03:36.935736   93373 out.go:309] Setting ErrFile to fd 2...
I1009 23:03:36.935743   93373 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:03:36.936022   93373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-78415/.minikube/bin
I1009 23:03:36.936747   93373 config.go:182] Loaded profile config "functional-964126": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1009 23:03:36.936912   93373 config.go:182] Loaded profile config "functional-964126": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1009 23:03:36.937445   93373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1009 23:03:36.937498   93373 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 23:03:36.950838   93373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44347
I1009 23:03:36.951259   93373 main.go:141] libmachine: () Calling .GetVersion
I1009 23:03:36.951772   93373 main.go:141] libmachine: Using API Version  1
I1009 23:03:36.951801   93373 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 23:03:36.952151   93373 main.go:141] libmachine: () Calling .GetMachineName
I1009 23:03:36.952318   93373 main.go:141] libmachine: (functional-964126) Calling .GetState
I1009 23:03:36.953933   93373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1009 23:03:36.953965   93373 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 23:03:36.967433   93373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
I1009 23:03:36.967821   93373 main.go:141] libmachine: () Calling .GetVersion
I1009 23:03:36.968329   93373 main.go:141] libmachine: Using API Version  1
I1009 23:03:36.968358   93373 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 23:03:36.968661   93373 main.go:141] libmachine: () Calling .GetMachineName
I1009 23:03:36.968813   93373 main.go:141] libmachine: (functional-964126) Calling .DriverName
I1009 23:03:36.969011   93373 ssh_runner.go:195] Run: systemctl --version
I1009 23:03:36.969045   93373 main.go:141] libmachine: (functional-964126) Calling .GetSSHHostname
I1009 23:03:36.971964   93373 main.go:141] libmachine: (functional-964126) DBG | domain functional-964126 has defined MAC address 52:54:00:65:fc:0b in network mk-functional-964126
I1009 23:03:36.972395   93373 main.go:141] libmachine: (functional-964126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:fc:0b", ip: ""} in network mk-functional-964126: {Iface:virbr1 ExpiryTime:2023-10-10 00:00:43 +0000 UTC Type:0 Mac:52:54:00:65:fc:0b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:functional-964126 Clientid:01:52:54:00:65:fc:0b}
I1009 23:03:36.972438   93373 main.go:141] libmachine: (functional-964126) DBG | domain functional-964126 has defined IP address 192.168.50.7 and MAC address 52:54:00:65:fc:0b in network mk-functional-964126
I1009 23:03:36.972583   93373 main.go:141] libmachine: (functional-964126) Calling .GetSSHPort
I1009 23:03:36.972741   93373 main.go:141] libmachine: (functional-964126) Calling .GetSSHKeyPath
I1009 23:03:36.972902   93373 main.go:141] libmachine: (functional-964126) Calling .GetSSHUsername
I1009 23:03:36.973059   93373 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/functional-964126/id_rsa Username:docker}
I1009 23:03:37.069098   93373 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1009 23:03:37.113772   93373 main.go:141] libmachine: Making call to close driver server
I1009 23:03:37.113789   93373 main.go:141] libmachine: (functional-964126) Calling .Close
I1009 23:03:37.114079   93373 main.go:141] libmachine: (functional-964126) DBG | Closing plugin on server side
I1009 23:03:37.114122   93373 main.go:141] libmachine: Successfully made call to close driver server
I1009 23:03:37.114141   93373 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 23:03:37.114152   93373 main.go:141] libmachine: Making call to close driver server
I1009 23:03:37.114164   93373 main.go:141] libmachine: (functional-964126) Calling .Close
I1009 23:03:37.114423   93373 main.go:141] libmachine: Successfully made call to close driver server
I1009 23:03:37.114442   93373 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.7s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-964126 ssh pgrep buildkitd: exit status 1 (223.881731ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image build -t localhost/my-image:functional-964126 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-964126 image build -t localhost/my-image:functional-964126 testdata/build --alsologtostderr: (3.231832348s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-964126 image build -t localhost/my-image:functional-964126 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 723a7ac87d2e
Removing intermediate container 723a7ac87d2e
---> 57d4df49b2ed
Step 3/3 : ADD content.txt /
---> 68240e33a765
Successfully built 68240e33a765
Successfully tagged localhost/my-image:functional-964126
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-964126 image build -t localhost/my-image:functional-964126 testdata/build --alsologtostderr:
I1009 23:03:37.409576   93470 out.go:296] Setting OutFile to fd 1 ...
I1009 23:03:37.409730   93470 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:03:37.409746   93470 out.go:309] Setting ErrFile to fd 2...
I1009 23:03:37.409755   93470 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:03:37.410101   93470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-78415/.minikube/bin
I1009 23:03:37.411002   93470 config.go:182] Loaded profile config "functional-964126": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1009 23:03:37.411691   93470 config.go:182] Loaded profile config "functional-964126": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1009 23:03:37.412307   93470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1009 23:03:37.412372   93470 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 23:03:37.426417   93470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40989
I1009 23:03:37.426915   93470 main.go:141] libmachine: () Calling .GetVersion
I1009 23:03:37.427562   93470 main.go:141] libmachine: Using API Version  1
I1009 23:03:37.427591   93470 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 23:03:37.427931   93470 main.go:141] libmachine: () Calling .GetMachineName
I1009 23:03:37.428122   93470 main.go:141] libmachine: (functional-964126) Calling .GetState
I1009 23:03:37.430056   93470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1009 23:03:37.430092   93470 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 23:03:37.443835   93470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46441
I1009 23:03:37.444272   93470 main.go:141] libmachine: () Calling .GetVersion
I1009 23:03:37.444777   93470 main.go:141] libmachine: Using API Version  1
I1009 23:03:37.444801   93470 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 23:03:37.445117   93470 main.go:141] libmachine: () Calling .GetMachineName
I1009 23:03:37.445313   93470 main.go:141] libmachine: (functional-964126) Calling .DriverName
I1009 23:03:37.445512   93470 ssh_runner.go:195] Run: systemctl --version
I1009 23:03:37.445544   93470 main.go:141] libmachine: (functional-964126) Calling .GetSSHHostname
I1009 23:03:37.448186   93470 main.go:141] libmachine: (functional-964126) DBG | domain functional-964126 has defined MAC address 52:54:00:65:fc:0b in network mk-functional-964126
I1009 23:03:37.448583   93470 main.go:141] libmachine: (functional-964126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:fc:0b", ip: ""} in network mk-functional-964126: {Iface:virbr1 ExpiryTime:2023-10-10 00:00:43 +0000 UTC Type:0 Mac:52:54:00:65:fc:0b Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:functional-964126 Clientid:01:52:54:00:65:fc:0b}
I1009 23:03:37.448634   93470 main.go:141] libmachine: (functional-964126) DBG | domain functional-964126 has defined IP address 192.168.50.7 and MAC address 52:54:00:65:fc:0b in network mk-functional-964126
I1009 23:03:37.448869   93470 main.go:141] libmachine: (functional-964126) Calling .GetSSHPort
I1009 23:03:37.449030   93470 main.go:141] libmachine: (functional-964126) Calling .GetSSHKeyPath
I1009 23:03:37.449197   93470 main.go:141] libmachine: (functional-964126) Calling .GetSSHUsername
I1009 23:03:37.449342   93470 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/functional-964126/id_rsa Username:docker}
I1009 23:03:37.548533   93470 build_images.go:151] Building image from path: /tmp/build.1145188766.tar
I1009 23:03:37.548587   93470 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1009 23:03:37.571902   93470 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1145188766.tar
I1009 23:03:37.585210   93470 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1145188766.tar: stat -c "%s %y" /var/lib/minikube/build/build.1145188766.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1145188766.tar': No such file or directory
I1009 23:03:37.585253   93470 ssh_runner.go:362] scp /tmp/build.1145188766.tar --> /var/lib/minikube/build/build.1145188766.tar (3072 bytes)
I1009 23:03:37.623719   93470 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1145188766
I1009 23:03:37.637834   93470 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1145188766 -xf /var/lib/minikube/build/build.1145188766.tar
I1009 23:03:37.663551   93470 docker.go:341] Building image: /var/lib/minikube/build/build.1145188766
I1009 23:03:37.663614   93470 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-964126 /var/lib/minikube/build/build.1145188766
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1009 23:03:40.546892   93470 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-964126 /var/lib/minikube/build/build.1145188766: (2.883250441s)
I1009 23:03:40.546977   93470 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1145188766
I1009 23:03:40.560369   93470 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1145188766.tar
I1009 23:03:40.572789   93470 build_images.go:207] Built localhost/my-image:functional-964126 from /tmp/build.1145188766.tar
I1009 23:03:40.572818   93470 build_images.go:123] succeeded building to: functional-964126
I1009 23:03:40.572824   93470 build_images.go:124] failed building to: 
I1009 23:03:40.572854   93470 main.go:141] libmachine: Making call to close driver server
I1009 23:03:40.572870   93470 main.go:141] libmachine: (functional-964126) Calling .Close
I1009 23:03:40.573231   93470 main.go:141] libmachine: Successfully made call to close driver server
I1009 23:03:40.573259   93470 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 23:03:40.573267   93470 main.go:141] libmachine: (functional-964126) DBG | Closing plugin on server side
I1009 23:03:40.573276   93470 main.go:141] libmachine: Making call to close driver server
I1009 23:03:40.573304   93470 main.go:141] libmachine: (functional-964126) Calling .Close
I1009 23:03:40.573530   93470 main.go:141] libmachine: Successfully made call to close driver server
I1009 23:03:40.573547   93470 main.go:141] libmachine: (functional-964126) DBG | Closing plugin on server side
I1009 23:03:40.573549   93470 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.70s)
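Note: from the Step 1/3 through Step 3/3 output above, the Dockerfile under testdata/build evidently reduces to three instructions (reconstructed from the build log, not copied from the repository):

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

The DEPRECATED warning in stderr comes from the legacy builder inside the VM; had the buildx component been installed there, the equivalent BuildKit invocation would be roughly docker buildx build -t localhost/my-image:functional-964126 . (context path illustrative).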

TestFunctional/parallel/ImageCommands/Setup (1.52s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.501472309s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-964126
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.52s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image load --daemon gcr.io/google-containers/addon-resizer:functional-964126 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-964126 image load --daemon gcr.io/google-containers/addon-resizer:functional-964126 --alsologtostderr: (4.124904439s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.38s)
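Note: image load --daemon copies an image from the host's Docker daemon into the cluster's container runtime. Together with the Setup test above, the full round trip is, in sketch form:

    docker pull gcr.io/google-containers/addon-resizer:1.8.8
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-964126
    out/minikube-linux-amd64 -p functional-964126 image load --daemon gcr.io/google-containers/addon-resizer:functional-964126

All three commands are taken from the test steps above; running them by hand assumes the host can pull from gcr.io.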

TestFunctional/parallel/DockerEnv/bash (0.87s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-964126 docker-env) && out/minikube-linux-amd64 status -p functional-964126"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-964126 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.87s)
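Note: docker-env prints export statements that point the current shell's Docker client at the daemon inside the minikube VM, which is why the test wraps the eval and the follow-up command in a single bash -c. An interactive sketch of the same flow:

    eval $(out/minikube-linux-amd64 -p functional-964126 docker-env)
    docker images    # now lists images from the VM's daemon, not the host's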

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
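Note: update-context rewrites the profile's kubeconfig entry so the server address matches the VM's current IP; the three subtests differ only in the kubeconfig's starting state. A sketch of a manual follow-up check (a context name matching the profile name is minikube's default behaviour, assumed here):

    out/minikube-linux-amd64 -p functional-964126 update-context
    kubectl config get-contexts functional-964126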

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.7s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image load --daemon gcr.io/google-containers/addon-resizer:functional-964126 --alsologtostderr
2023/10/09 23:03:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-964126 image load --daemon gcr.io/google-containers/addon-resizer:functional-964126 --alsologtostderr: (2.420496745s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.70s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.30653011s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-964126
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image load --daemon gcr.io/google-containers/addon-resizer:functional-964126 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-964126 image load --daemon gcr.io/google-containers/addon-resizer:functional-964126 --alsologtostderr: (3.770743652s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.34s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image save gcr.io/google-containers/addon-resizer:functional-964126 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-964126 image save gcr.io/google-containers/addon-resizer:functional-964126 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (2.15476452s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image rm gcr.io/google-containers/addon-resizer:functional-964126 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-964126 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (2.635636756s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.91s)
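Note: ImageSaveToFile and this test form a tarball round trip through image save and image load. In sketch form, with an illustrative local path standing in for the Jenkins workspace path used above:

    out/minikube-linux-amd64 -p functional-964126 image save gcr.io/google-containers/addon-resizer:functional-964126 ./addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-964126 image load ./addon-resizer-save.tar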

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-964126
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-964126 image save --daemon gcr.io/google-containers/addon-resizer:functional-964126 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-964126 image save --daemon gcr.io/google-containers/addon-resizer:functional-964126 --alsologtostderr: (1.442461084s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-964126
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.48s)

TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-964126
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-964126
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-964126
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (298.85s)
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-823224 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E1009 23:31:26.185816   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-823224 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m40.904639482s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-823224 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-823224 cache add gcr.io/k8s-minikube/gvisor-addon:2: (20.704782165s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-823224 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-823224 addons enable gvisor: (3.379696083s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [0b70f216-c954-4901-9504-8e893e3646c6] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.022376401s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-823224 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [65de46a6-3fad-49bb-acac-df070290a75d] Pending
helpers_test.go:344: "nginx-gvisor" [65de46a6-3fad-49bb-acac-df070290a75d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [65de46a6-3fad-49bb-acac-df070290a75d] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 13.026543338s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-823224
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-823224: (1m32.412284213s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-823224 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-823224 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (51.028483202s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [0b70f216-c954-4901-9504-8e893e3646c6] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [0b70f216-c954-4901-9504-8e893e3646c6] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.022655696s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [65de46a6-3fad-49bb-acac-df070290a75d] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.137000589s
helpers_test.go:175: Cleaning up "gvisor-823224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-823224
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-823224: (1.916130961s)
--- PASS: TestGvisorAddon (298.85s)
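Note: testdata/nginx-gvisor.yaml is not reproduced in this log, but given the run=nginx,runtime=gvisor selector and the pod name, it plausibly amounts to a pod pinned to the gVisor RuntimeClass. A hypothetical sketch (pod labels match the selector above; the image and structure are inferred, not taken from the actual file):

    kubectl --context gvisor-823224 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-gvisor
      labels:
        run: nginx
        runtime: gvisor
    spec:
      runtimeClassName: gvisor
      containers:
      - name: nginx
        image: nginx
    EOF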

TestImageBuild/serial/Setup (50.01s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-795882 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-795882 --driver=kvm2 : (50.013172953s)
--- PASS: TestImageBuild/serial/Setup (50.01s)

TestImageBuild/serial/NormalBuild (1.62s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-795882
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-795882: (1.615689349s)
--- PASS: TestImageBuild/serial/NormalBuild (1.62s)

TestImageBuild/serial/BuildWithBuildArg (1.23s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-795882
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-795882: (1.225189069s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.23s)
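Note: --build-opt=build-arg=ENV_A=test_env_str is forwarded to the underlying docker build --build-arg. The Dockerfile under testdata/image-build/test-arg is not shown in this log; a hypothetical minimal Dockerfile that would consume the argument:

    FROM busybox
    ARG ENV_A
    RUN echo "ENV_A is ${ENV_A}"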

TestImageBuild/serial/BuildWithDockerIgnore (0.38s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-795882
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.38s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.3s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-795882
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.30s)

TestIngressAddonLegacy/StartLegacyK8sCluster (73.67s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-466192 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E1009 23:05:02.053108   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-466192 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m13.674015617s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (73.67s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.47s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-466192 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-466192 addons enable ingress --alsologtostderr -v=5: (14.466591599s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.47s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.54s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-466192 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.54s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (35.02s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-466192 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-466192 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.733661834s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-466192 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-466192 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d6fcf68c-0d4e-45bb-9b9b-e638a472e36b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d6fcf68c-0d4e-45bb-9b9b-e638a472e36b] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.023057751s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-466192 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-466192 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-466192 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.247
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-466192 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-466192 addons disable ingress-dns --alsologtostderr -v=1: (2.489294692s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-466192 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-466192 addons disable ingress --alsologtostderr -v=1: (7.601268709s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (35.02s)
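Note: the sequence above is the standard ingress smoke test: enable the addons, apply an Ingress plus a backing pod and service, curl through the controller with an explicit Host header, then resolve a .test hostname against the node IP via ingress-dns. On a current (non-legacy) cluster the setup step is simply (a sketch; the addon names are unchanged in recent minikube releases):

    minikube addons enable ingress
    minikube addons enable ingress-dns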

TestJSONOutput/start/Command (102.78s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-233426 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E1009 23:07:18.208607   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
E1009 23:07:45.894398   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
E1009 23:08:00.834181   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:08:00.839449   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:08:00.849758   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:08:00.870018   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:08:00.910264   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:08:00.990536   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:08:01.150938   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:08:01.471499   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:08:02.112512   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:08:03.393259   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:08:05.954085   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:08:11.075121   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:08:21.316048   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:08:41.796614   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-233426 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m42.778359943s)
--- PASS: TestJSONOutput/start/Command (102.78s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.59s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-233426 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.54s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-233426 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-233426 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-233426 --output=json --user=testUser: (8.106785919s)
--- PASS: TestJSONOutput/stop/Command (8.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-877541 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-877541 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.362787ms)

-- stdout --
	{"specversion":"1.0","id":"afe50441-2175-452f-a927-306c8dc1ec36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-877541] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d294b4e-8b37-41ea-99fd-91b6b2211b5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17375"}}
	{"specversion":"1.0","id":"9a393df9-7d6e-43ec-af18-39f993f315d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b1dcd27b-596c-4716-ad56-e4af3ac1ec27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17375-78415/kubeconfig"}}
	{"specversion":"1.0","id":"52134866-5a58-4f43-89fa-e8e9690a85c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube"}}
	{"specversion":"1.0","id":"d159d93a-c9f6-4140-930e-272e5b86c0cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"275bd2b5-5ad7-43b9-9cc6-6c6a21bccf38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3f4a6522-ba23-47da-b10d-61d566620fa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-877541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-877541
--- PASS: TestErrorJSONOutput (0.22s)
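
As the stdout block above shows, --output=json makes minikube emit one CloudEvents-style JSON object per line, with the event class in "type" and the payload (message, currentstep, exitcode, name, ...) in "data". A minimal Go sketch for consuming such a stream, assuming it is piped in on stdin; the field set below is only the subset visible in this log:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event mirrors only the fields visible in this log; the real schema has more.
    type event struct {
        Type string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
        Data map[string]string `json:"data"` // message, currentstep, exitcode, name, ...
    }

    func main() {
        // Pipe `minikube start --output=json ...` into stdin.
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // tolerate non-JSON noise between events
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("%s (exit code %s): %s\n",
                    ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }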

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (107.13s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-620447 --driver=kvm2 
E1009 23:09:22.758087   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-620447 --driver=kvm2 : (53.657616746s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-625582 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-625582 --driver=kvm2 : (50.633977114s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-620447
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-625582
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-625582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-625582
helpers_test.go:175: Cleaning up "first-620447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-620447
--- PASS: TestMinikubeProfile (107.13s)

TestMountStart/serial/StartWithMountFirst (31.32s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-630242 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1009 23:10:44.680189   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-630242 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.319741371s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.32s)

TestMountStart/serial/VerifyMountFirst (0.41s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-630242 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-630242 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

TestMountStart/serial/StartWithMountSecond (28.24s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-658065 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1009 23:11:26.186340   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
E1009 23:11:26.191606   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
E1009 23:11:26.201904   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
E1009 23:11:26.222206   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
E1009 23:11:26.262517   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
E1009 23:11:26.342821   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
E1009 23:11:26.503255   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
E1009 23:11:26.823897   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
E1009 23:11:27.464976   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
E1009 23:11:28.746170   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
E1009 23:11:31.307161   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
E1009 23:11:36.427393   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-658065 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.236966712s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.24s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-658065 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-658065 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (0.91s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-630242 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.91s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-658065 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-658065 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (11.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-658065
E1009 23:11:46.668129   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-658065: (11.268536994s)
--- PASS: TestMountStart/serial/Stop (11.27s)

TestMountStart/serial/RestartStopped (23.48s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-658065
E1009 23:12:07.148470   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-658065: (22.477603365s)
E1009 23:12:18.208969   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (23.48s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-658065 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-658065 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

TestMultiNode/serial/FreshStart2Nodes (130.12s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-921619 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E1009 23:12:48.108787   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
E1009 23:13:00.834651   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:13:28.521263   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:14:10.029430   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-921619 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m9.693465726s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (130.12s)

TestMultiNode/serial/DeployApp2Nodes (5.07s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-921619 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-921619 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-921619 -- rollout status deployment/busybox: (3.189403848s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-921619 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-921619 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-921619 -- exec busybox-5bc68d56bd-6xrrs -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-921619 -- exec busybox-5bc68d56bd-pbmjv -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-921619 -- exec busybox-5bc68d56bd-6xrrs -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-921619 -- exec busybox-5bc68d56bd-pbmjv -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-921619 -- exec busybox-5bc68d56bd-6xrrs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-921619 -- exec busybox-5bc68d56bd-pbmjv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.07s)

TestMultiNode/serial/PingHostFrom2Pods (0.95s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-921619 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-921619 -- exec busybox-5bc68d56bd-6xrrs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-921619 -- exec busybox-5bc68d56bd-6xrrs -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-921619 -- exec busybox-5bc68d56bd-pbmjv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-921619 -- exec busybox-5bc68d56bd-pbmjv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)
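
The extraction trick above is worth noting: in the busybox image, `nslookup host.minikube.internal` prints the answer on line 5, so `awk 'NR==5' | cut -d' ' -f3` slices out the bare address (192.168.39.1 in this run), which the test then pings from each pod. A hedged Go sketch of the same probe, assuming kubectl is on PATH and using a pod name from this run purely for illustration:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        pod := "busybox-5bc68d56bd-6xrrs" // pod name as it appears above; yours will differ

        // nslookup in busybox prints the answer on line 5; awk/cut extract the bare IP.
        script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
        out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", script).Output()
        if err != nil {
            log.Fatal(err)
        }
        hostIP := strings.TrimSpace(string(out))
        fmt.Println("host IP:", hostIP)

        // One ICMP probe back to the hypervisor-side address, as the test does.
        if err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c",
            "ping -c 1 "+hostIP).Run(); err != nil {
            log.Fatalf("ping from pod failed: %v", err)
        }
    }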

                                                
                                    
TestMultiNode/serial/AddNode (46.2s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-921619 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-921619 -v 3 --alsologtostderr: (45.609640091s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.20s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (7.66s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 cp testdata/cp-test.txt multinode-921619:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 cp multinode-921619:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2293928982/001/cp-test_multinode-921619.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 cp multinode-921619:/home/docker/cp-test.txt multinode-921619-m02:/home/docker/cp-test_multinode-921619_multinode-921619-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619-m02 "sudo cat /home/docker/cp-test_multinode-921619_multinode-921619-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 cp multinode-921619:/home/docker/cp-test.txt multinode-921619-m03:/home/docker/cp-test_multinode-921619_multinode-921619-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619-m03 "sudo cat /home/docker/cp-test_multinode-921619_multinode-921619-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 cp testdata/cp-test.txt multinode-921619-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 cp multinode-921619-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2293928982/001/cp-test_multinode-921619-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 cp multinode-921619-m02:/home/docker/cp-test.txt multinode-921619:/home/docker/cp-test_multinode-921619-m02_multinode-921619.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619 "sudo cat /home/docker/cp-test_multinode-921619-m02_multinode-921619.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 cp multinode-921619-m02:/home/docker/cp-test.txt multinode-921619-m03:/home/docker/cp-test_multinode-921619-m02_multinode-921619-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619-m03 "sudo cat /home/docker/cp-test_multinode-921619-m02_multinode-921619-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 cp testdata/cp-test.txt multinode-921619-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 cp multinode-921619-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2293928982/001/cp-test_multinode-921619-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 cp multinode-921619-m03:/home/docker/cp-test.txt multinode-921619:/home/docker/cp-test_multinode-921619-m03_multinode-921619.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619 "sudo cat /home/docker/cp-test_multinode-921619-m03_multinode-921619.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 cp multinode-921619-m03:/home/docker/cp-test.txt multinode-921619-m02:/home/docker/cp-test_multinode-921619-m03_multinode-921619-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 ssh -n multinode-921619-m02 "sudo cat /home/docker/cp-test_multinode-921619-m03_multinode-921619-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.66s)
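
Each leg of the copy matrix above follows the same round-trip: `minikube cp` pushes a file into a node, then `minikube ssh -n <node> "sudo cat ..."` reads it back for comparison. A minimal Go sketch of one leg, assuming the binary path, profile, and node names from this run:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        profile, node := "multinode-921619", "multinode-921619-m02"

        want, err := os.ReadFile("testdata/cp-test.txt")
        if err != nil {
            log.Fatal(err)
        }

        // minikube cp copies from the host into the node's filesystem.
        if err := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp",
            "testdata/cp-test.txt", node+":/home/docker/cp-test.txt").Run(); err != nil {
            log.Fatal(err)
        }

        // Read the file back over ssh, as helpers_test.go does with `sudo cat`.
        got, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
            "-n", node, "sudo cat /home/docker/cp-test.txt").Output()
        if err != nil {
            log.Fatal(err)
        }
        if string(got) != string(want) {
            log.Fatalf("content mismatch after round-trip:\n got: %q\nwant: %q", got, want)
        }
    }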

                                                
                                    
TestMultiNode/serial/StopNode (3.99s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-921619 node stop m03: (3.092998703s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-921619 status: exit status 7 (438.724465ms)

-- stdout --
	multinode-921619
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-921619-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-921619-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-921619 status --alsologtostderr: exit status 7 (457.467116ms)

-- stdout --
	multinode-921619
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-921619-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-921619-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1009 23:15:33.819079  100580 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:15:33.819313  100580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:15:33.819321  100580 out.go:309] Setting ErrFile to fd 2...
	I1009 23:15:33.819326  100580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:15:33.819506  100580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-78415/.minikube/bin
	I1009 23:15:33.819659  100580 out.go:303] Setting JSON to false
	I1009 23:15:33.819699  100580 mustload.go:65] Loading cluster: multinode-921619
	I1009 23:15:33.819798  100580 notify.go:220] Checking for updates...
	I1009 23:15:33.820081  100580 config.go:182] Loaded profile config "multinode-921619": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:15:33.820095  100580 status.go:255] checking status of multinode-921619 ...
	I1009 23:15:33.820456  100580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:15:33.820515  100580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:15:33.841577  100580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40131
	I1009 23:15:33.841969  100580 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:15:33.842593  100580 main.go:141] libmachine: Using API Version  1
	I1009 23:15:33.842618  100580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:15:33.842952  100580 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:15:33.843147  100580 main.go:141] libmachine: (multinode-921619) Calling .GetState
	I1009 23:15:33.844516  100580 status.go:330] multinode-921619 host status = "Running" (err=<nil>)
	I1009 23:15:33.844532  100580 host.go:66] Checking if "multinode-921619" exists ...
	I1009 23:15:33.844824  100580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:15:33.844871  100580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:15:33.859838  100580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I1009 23:15:33.860178  100580 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:15:33.860604  100580 main.go:141] libmachine: Using API Version  1
	I1009 23:15:33.860626  100580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:15:33.860903  100580 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:15:33.861080  100580 main.go:141] libmachine: (multinode-921619) Calling .GetIP
	I1009 23:15:33.863967  100580 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:15:33.864430  100580 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:12:35 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:15:33.864460  100580 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:15:33.864589  100580 host.go:66] Checking if "multinode-921619" exists ...
	I1009 23:15:33.864860  100580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:15:33.864912  100580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:15:33.880290  100580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41383
	I1009 23:15:33.880674  100580 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:15:33.881137  100580 main.go:141] libmachine: Using API Version  1
	I1009 23:15:33.881159  100580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:15:33.881474  100580 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:15:33.881638  100580 main.go:141] libmachine: (multinode-921619) Calling .DriverName
	I1009 23:15:33.881824  100580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 23:15:33.881846  100580 main.go:141] libmachine: (multinode-921619) Calling .GetSSHHostname
	I1009 23:15:33.884482  100580 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:15:33.884845  100580 main.go:141] libmachine: (multinode-921619) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:2b:27", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:12:35 +0000 UTC Type:0 Mac:52:54:00:65:2b:27 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:multinode-921619 Clientid:01:52:54:00:65:2b:27}
	I1009 23:15:33.884870  100580 main.go:141] libmachine: (multinode-921619) DBG | domain multinode-921619 has defined IP address 192.168.39.167 and MAC address 52:54:00:65:2b:27 in network mk-multinode-921619
	I1009 23:15:33.885006  100580 main.go:141] libmachine: (multinode-921619) Calling .GetSSHPort
	I1009 23:15:33.885165  100580 main.go:141] libmachine: (multinode-921619) Calling .GetSSHKeyPath
	I1009 23:15:33.885306  100580 main.go:141] libmachine: (multinode-921619) Calling .GetSSHUsername
	I1009 23:15:33.885422  100580 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619/id_rsa Username:docker}
	I1009 23:15:33.971373  100580 ssh_runner.go:195] Run: systemctl --version
	I1009 23:15:33.977312  100580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:15:33.991740  100580 kubeconfig.go:92] found "multinode-921619" server: "https://192.168.39.167:8443"
	I1009 23:15:33.991768  100580 api_server.go:166] Checking apiserver status ...
	I1009 23:15:33.991810  100580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:15:34.008466  100580 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1839/cgroup
	I1009 23:15:34.019323  100580 api_server.go:182] apiserver freezer: "6:freezer:/kubepods/burstable/pod3992fff0ca56642e7b8e9139e8dd6a1b/6807030f028b18563b79fa23e45d056e216882e167ca51b7c1e817f7518814c0"
	I1009 23:15:34.019386  100580 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod3992fff0ca56642e7b8e9139e8dd6a1b/6807030f028b18563b79fa23e45d056e216882e167ca51b7c1e817f7518814c0/freezer.state
	I1009 23:15:34.030720  100580 api_server.go:204] freezer state: "THAWED"
	I1009 23:15:34.030739  100580 api_server.go:253] Checking apiserver healthz at https://192.168.39.167:8443/healthz ...
	I1009 23:15:34.035625  100580 api_server.go:279] https://192.168.39.167:8443/healthz returned 200:
	ok
	I1009 23:15:34.035650  100580 status.go:421] multinode-921619 apiserver status = Running (err=<nil>)
	I1009 23:15:34.035663  100580 status.go:257] multinode-921619 status: &{Name:multinode-921619 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 23:15:34.035689  100580 status.go:255] checking status of multinode-921619-m02 ...
	I1009 23:15:34.035967  100580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:15:34.036006  100580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:15:34.050926  100580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I1009 23:15:34.051289  100580 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:15:34.051737  100580 main.go:141] libmachine: Using API Version  1
	I1009 23:15:34.051762  100580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:15:34.052058  100580 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:15:34.052223  100580 main.go:141] libmachine: (multinode-921619-m02) Calling .GetState
	I1009 23:15:34.053640  100580 status.go:330] multinode-921619-m02 host status = "Running" (err=<nil>)
	I1009 23:15:34.053658  100580 host.go:66] Checking if "multinode-921619-m02" exists ...
	I1009 23:15:34.053931  100580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:15:34.053969  100580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:15:34.069718  100580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36543
	I1009 23:15:34.070060  100580 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:15:34.070476  100580 main.go:141] libmachine: Using API Version  1
	I1009 23:15:34.070500  100580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:15:34.070777  100580 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:15:34.070953  100580 main.go:141] libmachine: (multinode-921619-m02) Calling .GetIP
	I1009 23:15:34.073223  100580 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:15:34.073590  100580 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:13:55 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:15:34.073636  100580 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:15:34.073969  100580 host.go:66] Checking if "multinode-921619-m02" exists ...
	I1009 23:15:34.074235  100580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:15:34.074268  100580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:15:34.088548  100580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36323
	I1009 23:15:34.088882  100580 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:15:34.089287  100580 main.go:141] libmachine: Using API Version  1
	I1009 23:15:34.089331  100580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:15:34.089610  100580 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:15:34.089777  100580 main.go:141] libmachine: (multinode-921619-m02) Calling .DriverName
	I1009 23:15:34.089964  100580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 23:15:34.089986  100580 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHHostname
	I1009 23:15:34.092335  100580 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:15:34.092720  100580 main.go:141] libmachine: (multinode-921619-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:ca:45", ip: ""} in network mk-multinode-921619: {Iface:virbr1 ExpiryTime:2023-10-10 00:13:55 +0000 UTC Type:0 Mac:52:54:00:56:ca:45 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:multinode-921619-m02 Clientid:01:52:54:00:56:ca:45}
	I1009 23:15:34.092744  100580 main.go:141] libmachine: (multinode-921619-m02) DBG | domain multinode-921619-m02 has defined IP address 192.168.39.121 and MAC address 52:54:00:56:ca:45 in network mk-multinode-921619
	I1009 23:15:34.092866  100580 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHPort
	I1009 23:15:34.093034  100580 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHKeyPath
	I1009 23:15:34.093179  100580 main.go:141] libmachine: (multinode-921619-m02) Calling .GetSSHUsername
	I1009 23:15:34.093308  100580 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17375-78415/.minikube/machines/multinode-921619-m02/id_rsa Username:docker}
	I1009 23:15:34.185229  100580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:15:34.196978  100580 status.go:257] multinode-921619-m02 status: &{Name:multinode-921619-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1009 23:15:34.197020  100580 status.go:255] checking status of multinode-921619-m03 ...
	I1009 23:15:34.197397  100580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:15:34.197439  100580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:15:34.212275  100580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I1009 23:15:34.212657  100580 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:15:34.213097  100580 main.go:141] libmachine: Using API Version  1
	I1009 23:15:34.213125  100580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:15:34.213471  100580 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:15:34.213646  100580 main.go:141] libmachine: (multinode-921619-m03) Calling .GetState
	I1009 23:15:34.215021  100580 status.go:330] multinode-921619-m03 host status = "Stopped" (err=<nil>)
	I1009 23:15:34.215037  100580 status.go:343] host is not running, skipping remaining checks
	I1009 23:15:34.215044  100580 status.go:257] multinode-921619-m03 status: &{Name:multinode-921619-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.99s)
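
Note the exit-code convention visible above: `minikube status` returns a non-zero code (7 in this run) when any node is stopped, while still printing the per-node breakdown on stdout, so a caller has to inspect the exit code rather than treating every error as fatal. A hedged Go sketch of that handling, assuming the same binary and profile:

    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-921619", "status")
        out, err := cmd.Output()
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("all nodes running")
        case errors.As(err, &ee):
            // Exit status 7 in this run meant a stopped host; stdout still
            // carries the per-node breakdown shown above.
            fmt.Printf("degraded cluster (exit %d):\n%s", ee.ExitCode(), out)
        default:
            log.Fatal(err) // the binary itself could not be run
        }
    }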

                                                
                                    
TestMultiNode/serial/StartAfterStop (32.2s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-921619 node start m03 --alsologtostderr: (31.542354003s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.20s)

TestMultiNode/serial/RestartKeepsNodes (188.71s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-921619
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-921619
E1009 23:16:26.185982   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-921619: (28.507255914s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-921619 --wait=true -v=8 --alsologtostderr
E1009 23:16:53.870642   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
E1009 23:17:18.208979   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
E1009 23:18:00.834634   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:18:41.255143   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-921619 --wait=true -v=8 --alsologtostderr: (2m40.077098118s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-921619
--- PASS: TestMultiNode/serial/RestartKeepsNodes (188.71s)

TestMultiNode/serial/DeleteNode (1.77s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-921619 node delete m03: (1.202654025s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.77s)

TestMultiNode/serial/StopMultiNode (25.6s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-921619 stop: (25.411830365s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-921619 status: exit status 7 (93.627343ms)

-- stdout --
	multinode-921619
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-921619-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-921619 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-921619 status --alsologtostderr: exit status 7 (93.354665ms)

-- stdout --
	multinode-921619
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-921619-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1009 23:19:42.459984  102477 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:19:42.460221  102477 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:19:42.460229  102477 out.go:309] Setting ErrFile to fd 2...
	I1009 23:19:42.460242  102477 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:19:42.460435  102477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-78415/.minikube/bin
	I1009 23:19:42.460602  102477 out.go:303] Setting JSON to false
	I1009 23:19:42.460643  102477 mustload.go:65] Loading cluster: multinode-921619
	I1009 23:19:42.460746  102477 notify.go:220] Checking for updates...
	I1009 23:19:42.461069  102477 config.go:182] Loaded profile config "multinode-921619": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1009 23:19:42.461084  102477 status.go:255] checking status of multinode-921619 ...
	I1009 23:19:42.461469  102477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:19:42.461538  102477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:19:42.475791  102477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36737
	I1009 23:19:42.476268  102477 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:19:42.476811  102477 main.go:141] libmachine: Using API Version  1
	I1009 23:19:42.476836  102477 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:19:42.477291  102477 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:19:42.477496  102477 main.go:141] libmachine: (multinode-921619) Calling .GetState
	I1009 23:19:42.479248  102477 status.go:330] multinode-921619 host status = "Stopped" (err=<nil>)
	I1009 23:19:42.479267  102477 status.go:343] host is not running, skipping remaining checks
	I1009 23:19:42.479273  102477 status.go:257] multinode-921619 status: &{Name:multinode-921619 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 23:19:42.479315  102477 status.go:255] checking status of multinode-921619-m02 ...
	I1009 23:19:42.479575  102477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1009 23:19:42.479607  102477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 23:19:42.493305  102477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I1009 23:19:42.493680  102477 main.go:141] libmachine: () Calling .GetVersion
	I1009 23:19:42.494097  102477 main.go:141] libmachine: Using API Version  1
	I1009 23:19:42.494119  102477 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 23:19:42.494405  102477 main.go:141] libmachine: () Calling .GetMachineName
	I1009 23:19:42.494582  102477 main.go:141] libmachine: (multinode-921619-m02) Calling .GetState
	I1009 23:19:42.496064  102477 status.go:330] multinode-921619-m02 host status = "Stopped" (err=<nil>)
	I1009 23:19:42.496077  102477 status.go:343] host is not running, skipping remaining checks
	I1009 23:19:42.496082  102477 status.go:257] multinode-921619-m02 status: &{Name:multinode-921619-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.60s)
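Note: `minikube status` encodes cluster state in its exit code, which is why the harness logs exit status 7 above and still passes; 7 reports a stopped host rather than a command failure. A small sketch of reading that code from Go, assuming the same binary path and profile name as this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-921619", "status")
	out, err := cmd.Output()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode() // 7 here means host/kubelet/apiserver stopped, as in the log
	} else if err != nil {
		panic(err) // could not run the binary at all; not a status result
	}
	fmt.Printf("exit=%d\n%s", code, out)
}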

TestMultiNode/serial/ValidateNameConflict (54.31s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-921619
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-921619-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-921619-m02 --driver=kvm2 : exit status 14 (79.011455ms)

-- stdout --
	* [multinode-921619-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17375-78415/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-921619-m02' is duplicated with machine name 'multinode-921619-m02' in profile 'multinode-921619'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-921619-m03 --driver=kvm2 
E1009 23:21:26.185893   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-921619-m03 --driver=kvm2 : (53.142482438s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-921619
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-921619: exit status 80 (235.533682ms)

-- stdout --
	* Adding node m03 to cluster multinode-921619
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-921619-m03 already exists in multinode-921619-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-921619-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (54.31s)
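Note: both failures above follow from one naming rule: multi-node machines are named <profile>-m02, <profile>-m03, and so on, so a new profile may clash either with an existing profile or with one of its machine names. An illustrative sketch of that rule (not minikube's actual implementation):

package main

import "fmt"

// conflicts reports whether newProfile collides with an existing profile
// name or with any machine name derived from it.
func conflicts(newProfile string, profiles map[string]int) bool {
	for name, nodes := range profiles {
		if newProfile == name {
			return true
		}
		for i := 2; i <= nodes; i++ { // the first node carries the bare profile name
			if newProfile == fmt.Sprintf("%s-m%02d", name, i) {
				return true
			}
		}
	}
	return false
}

func main() {
	existing := map[string]int{"multinode-921619": 2} // a two-node profile, as in this run
	fmt.Println(conflicts("multinode-921619-m02", existing)) // true: taken by a machine name
	fmt.Println(conflicts("multinode-921619-m03", existing)) // false: free, so the start succeeded
}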

TestPreload (170.59s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-716167 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E1009 23:22:18.208260   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
E1009 23:23:00.833903   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-716167 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m27.610369683s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-716167 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-716167 image pull gcr.io/k8s-minikube/busybox: (1.305590201s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-716167
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-716167: (13.114434366s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-716167 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E1009 23:24:23.882102   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-716167 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m7.289752321s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-716167 image list
helpers_test.go:175: Cleaning up "test-preload-716167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-716167
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-716167: (1.050239606s)
--- PASS: TestPreload (170.59s)
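Note: TestPreload's sequence boils down to one property: an image pulled into a cluster started with --preload=false must still be listed after a stop/start cycle. A sketch of the same sequence driven from Go, reusing this run's binary path, profile name, and flags (error handling reduced to panics):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	p := "test-preload-716167"
	run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", p)
	run("start", "-p", p, "--wait=true", "--driver=kvm2")
	images := run("-p", p, "image", "list")
	fmt.Println("busybox survived restart:",
		strings.Contains(images, "gcr.io/k8s-minikube/busybox"))
}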

TestScheduledStopUnix (122.22s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-042535 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-042535 --memory=2048 --driver=kvm2 : (50.486607859s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-042535 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-042535 -n scheduled-stop-042535
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-042535 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-042535 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-042535 -n scheduled-stop-042535
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-042535
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-042535 --schedule 15s
E1009 23:26:26.186258   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-042535
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-042535: exit status 7 (76.882461ms)

-- stdout --
	scheduled-stop-042535
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-042535 -n scheduled-stop-042535
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-042535 -n scheduled-stop-042535: exit status 7 (75.361974ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-042535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-042535
--- PASS: TestScheduledStopUnix (122.22s)
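Note: the scheduled-stop steps read as: schedule a stop, replace the schedule (killing the earlier scheduler process is what yields the "os: process already finished" signal errors), then cancel. minikube implements this with a detached child process; the in-process timer below is only a conceptual sketch of the replace-and-cancel semantics:

package main

import (
	"fmt"
	"time"
)

func main() {
	stop := func() { fmt.Println("stopping cluster") }

	t := time.AfterFunc(5*time.Minute, stop) // stop --schedule 5m
	t.Stop()                                 // a new --schedule replaces the old one...
	t = time.AfterFunc(15*time.Second, stop) // ...stop --schedule 15s
	t.Stop()                                 // stop --cancel-scheduled
	fmt.Println("no stop pending")
}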

TestSkaffold (140.68s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3048430879 version
skaffold_test.go:63: skaffold version: v2.8.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-961234 --memory=2600 --driver=kvm2 
E1009 23:27:18.208892   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
E1009 23:27:49.231242   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-961234 --memory=2600 --driver=kvm2 : (50.067758501s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3048430879 run --minikube-profile skaffold-961234 --kube-context skaffold-961234 --status-check=true --port-forward=false --interactive=false
E1009 23:28:00.834719   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3048430879 run --minikube-profile skaffold-961234 --kube-context skaffold-961234 --status-check=true --port-forward=false --interactive=false: (1m18.508014476s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-b54dcfcfd-r59dz" [21111912-2a7f-4a34-ad31-fadfb6172156] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.016424959s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-84b8578dc4-tbgh4" [3547d315-fcb6-4fd5-95a8-d1deb793c35c] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.012239015s
helpers_test.go:175: Cleaning up "skaffold-961234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-961234
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-961234: (1.177067544s)
--- PASS: TestSkaffold (140.68s)

TestKubernetesUpgrade (228.12s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-569378 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-569378 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m7.663451949s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-569378
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-569378: (3.149079037s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-569378 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-569378 status --format={{.Host}}: exit status 7 (87.019206ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-569378 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-569378 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 : (1m28.248002926s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-569378 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-569378 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-569378 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (99.940795ms)

-- stdout --
	* [kubernetes-upgrade-569378] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17375-78415/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-569378
	    minikube start -p kubernetes-upgrade-569378 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5693782 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-569378 --kubernetes-version=v1.28.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-569378 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-569378 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 : (1m6.731055159s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-569378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-569378
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-569378: (2.07459048s)
--- PASS: TestKubernetesUpgrade (228.12s)
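Note: the downgrade attempt fails by design: minikube allows raising --kubernetes-version on an existing profile but refuses to lower it, exiting 106 with the recreate/second-cluster suggestions shown above. A sketch of such a version gate using golang.org/x/mod/semver (an illustration of the policy, not minikube's actual code; the module is an assumed dependency):

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// check rejects any request that would move an existing cluster backwards.
func check(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
			existing, requested)
	}
	return nil
}

func main() {
	fmt.Println(check("v1.16.0", "v1.28.2")) // <nil>: upgrade is allowed
	fmt.Println(check("v1.28.2", "v1.16.0")) // downgrade rejected; the CLI exits 106
}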

TestPause/serial/Start (134.72s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-949418 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-949418 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (2m14.723337791s)
--- PASS: TestPause/serial/Start (134.72s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-352997 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-352997 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (77.220983ms)

-- stdout --
	* [NoKubernetes-352997] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17375-78415/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-78415/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
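Note: the usage error above is plain flag validation: --kubernetes-version and --no-kubernetes are mutually exclusive, and because a version can also come from global config, the CLI suggests `minikube config unset kubernetes-version`. An illustrative sketch of the check (not minikube's code):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	version := flag.String("kubernetes-version", "", "desired Kubernetes version")
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	flag.Parse()

	if *noK8s && *version != "" {
		fmt.Println("cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the real CLI maps this usage error to exit status 14 (MK_USAGE)
	}
	fmt.Println("flags ok")
}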

TestNoKubernetes/serial/StartWithK8s (127.33s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-352997 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-352997 --driver=kvm2 : (2m6.990695378s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-352997 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (127.33s)

TestPause/serial/SecondStartNoReconfiguration (66.7s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-949418 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-949418 --alsologtostderr -v=1 --driver=kvm2 : (1m6.669646685s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (66.70s)

TestNoKubernetes/serial/StartWithStopK8s (32.62s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-352997 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-352997 --no-kubernetes --driver=kvm2 : (31.30758524s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-352997 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-352997 status -o json: exit status 2 (265.615215ms)

-- stdout --
	{"Name":"NoKubernetes-352997","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-352997
E1009 23:32:18.208275   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-352997: (1.050620341s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (32.62s)
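Note: the `status -o json` line captured above decodes into a small struct; a sketch using this run's exact output. A running host with kubelet and apiserver stopped is precisely the --no-kubernetes shape, and the non-zero exit (status 2 above) again encodes state rather than failure:

package main

import (
	"encoding/json"
	"fmt"
)

type status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-352997","Host":"Running","Kubelet":"Stopped",` +
		`"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var s status
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", s)
}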

TestNoKubernetes/serial/Start (29.14s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-352997 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-352997 --no-kubernetes --driver=kvm2 : (29.142974184s)
--- PASS: TestNoKubernetes/serial/Start (29.14s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-352997 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-352997 "sudo systemctl is-active --quiet service kubelet": exit status 1 (257.71908ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
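Note: this check passes because `systemctl is-active` exits 0 only when the unit is active; ssh propagates the failure (status 3 is what systemd reports for an inactive unit), so a non-zero exit proves kubelet is not running. A sketch of the same assertion from Go, using this run's binary and profile:

package main

import (
	"fmt"
	"os/exec"
)

func kubeletInactive(profile string) bool {
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	_, isExitErr := err.(*exec.ExitError)
	return isExitErr // any non-zero exit means kubelet is not active
}

func main() {
	fmt.Println("kubelet inactive:", kubeletInactive("NoKubernetes-352997"))
}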

TestPause/serial/Pause (0.73s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-949418 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

TestNoKubernetes/serial/ProfileList (18.56s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (17.017664499s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.54012423s)
--- PASS: TestNoKubernetes/serial/ProfileList (18.56s)

TestPause/serial/VerifyStatus (0.3s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-949418 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-949418 --output=json --layout=cluster: exit status 2 (301.509035ms)

-- stdout --
	{"Name":"pause-949418","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-949418","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
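Note: the --layout=cluster status above reports HTTP-like codes: 200 OK, 405 Stopped, 418 Paused. A decoding sketch over a trimmed copy of the captured JSON (the struct fields cover only what appears there, not the full schema):

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		StatusCode int
		StatusName string
		Components map[string]component
	}
}

func main() {
	raw := `{"Name":"pause-949418","StatusCode":418,"StatusName":"Paused",
	 "Nodes":[{"Name":"pause-949418","StatusCode":200,"StatusName":"OK",
	 "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
	 "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var cs clusterStatus
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	for _, n := range cs.Nodes {
		for name, c := range n.Components {
			fmt.Printf("%s/%s: %d %s\n", n.Name, name, c.StatusCode, c.StatusName)
		}
	}
}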

TestPause/serial/Unpause (0.77s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-949418 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.77s)

TestPause/serial/PauseAgain (0.84s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-949418 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

TestPause/serial/DeletePaused (1.11s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-949418 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-949418 --alsologtostderr -v=5: (1.109173926s)
--- PASS: TestPause/serial/DeletePaused (1.11s)

TestPause/serial/VerifyDeletedResources (14.36s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1009 23:33:00.834185   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.354554149s)
--- PASS: TestPause/serial/VerifyDeletedResources (14.36s)

TestStoppedBinaryUpgrade/Setup (0.44s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

TestNoKubernetes/serial/Stop (2.45s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-352997
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-352997: (2.452410196s)
--- PASS: TestNoKubernetes/serial/Stop (2.45s)

TestStoppedBinaryUpgrade/Upgrade (232.16s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.3886526978.exe start -p stopped-upgrade-049998 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.3886526978.exe start -p stopped-upgrade-049998 --memory=2200 --vm-driver=kvm2 : (2m9.90230417s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.3886526978.exe -p stopped-upgrade-049998 stop
E1009 23:35:21.255911   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.3886526978.exe -p stopped-upgrade-049998 stop: (13.092048403s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-049998 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E1009 23:35:34.661049   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/skaffold-961234/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-049998 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m29.160284691s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (232.16s)
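Note: the upgrade test drives two binaries against one profile: a released minikube (the versioned executable unpacked under /tmp) creates and stops the cluster, then the freshly built binary must start it. A sketch of that sequence, reusing this run's paths and flags (note the old release takes --vm-driver where the new binary takes --driver):

package main

import (
	"fmt"
	"os/exec"
)

func mustRun(bin string, args ...string) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", bin, args, err, out))
	}
}

func main() {
	old, cur := "/tmp/minikube-v1.6.2.3886526978.exe", "out/minikube-linux-amd64"
	p := "stopped-upgrade-049998"
	mustRun(old, "start", "-p", p, "--memory=2200", "--vm-driver=kvm2")
	mustRun(old, "-p", p, "stop")
	mustRun(cur, "start", "-p", p, "--memory=2200", "--driver=kvm2") // the upgrade path under test
}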

TestNoKubernetes/serial/StartNoArgs (25.69s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-352997 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-352997 --driver=kvm2 : (25.686668875s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (25.69s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-352997 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-352997 "sudo systemctl is-active --quiet service kubelet": exit status 1 (215.067615ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

TestStartStop/group/old-k8s-version/serial/FirstStart (193.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-757458 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E1009 23:36:26.186447   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-757458 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (3m13.078034285s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (193.08s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.4s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-049998
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-049998: (1.400320699s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.40s)

TestStartStop/group/no-preload/serial/FirstStart (117s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-737489 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2
E1009 23:37:18.209168   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-737489 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2: (1m56.998911899s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (117.00s)

TestStartStop/group/embed-certs/serial/FirstStart (93.2s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-199813 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2
E1009 23:38:00.834204   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:38:13.720477   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:38:13.725763   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:38:13.736134   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:38:13.756457   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:38:13.797359   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:38:13.877541   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:38:14.038007   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:38:14.358570   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:38:14.999213   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:38:16.279447   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:38:18.839931   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:38:23.960582   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:38:34.201025   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:38:54.682268   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-199813 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2: (1m33.201828282s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (93.20s)

TestStartStop/group/no-preload/serial/DeployApp (9.49s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-737489 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2af089b9-c00d-4d4c-9577-03bf112d0b7e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2af089b9-c00d-4d4c-9577-03bf112d0b7e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.038033064s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-737489 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.49s)
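Note: the DeployApp steps create a pod from testdata/busybox.yaml, wait for the integration-test=busybox label to report Running, then read the container's fd limit with `ulimit -n`. The manifest below is a guess at that file's shape (only the pod name and label are confirmed by this log, and the image is borrowed from the busybox image pulled earlier in this report), applied the way the test does, against an explicit context:

package main

import (
	"os/exec"
	"strings"
)

// A hypothetical stand-in for testdata/busybox.yaml; not the repo's actual file.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox
    command: ["sleep", "3600"]
`

func main() {
	cmd := exec.Command("kubectl", "--context", "no-preload-737489", "create", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(string(out))
	}
	// The subsequent check reads the container's fd limit:
	//   kubectl --context no-preload-737489 exec busybox -- /bin/sh -c "ulimit -n"
}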

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-737489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-737489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.190837595s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-737489 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/no-preload/serial/Stop (13.13s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-737489 --alsologtostderr -v=3
E1009 23:39:12.739463   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/skaffold-961234/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-737489 --alsologtostderr -v=3: (13.125014841s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.13s)

TestStartStop/group/embed-certs/serial/DeployApp (9.41s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-199813 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ed6824dd-c48b-4a40-980b-705a2e6030e0] Pending
helpers_test.go:344: "busybox" [ed6824dd-c48b-4a40-980b-705a2e6030e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ed6824dd-c48b-4a40-980b-705a2e6030e0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.03027178s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-199813 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.41s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-757458 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [38da000f-f689-4780-92f6-a1bf6a85254b] Pending
helpers_test.go:344: "busybox" [38da000f-f689-4780-92f6-a1bf6a85254b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [38da000f-f689-4780-92f6-a1bf6a85254b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.035202646s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-757458 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-737489 -n no-preload-737489
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-737489 -n no-preload-737489: exit status 7 (82.339677ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-737489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (315.48s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-737489 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-737489 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2: (5m15.184696142s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-737489 -n no-preload-737489
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (315.48s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-199813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-199813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.092592224s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-199813 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (13.13s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-199813 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-199813 --alsologtostderr -v=3: (13.131303247s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.13s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-757458 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-757458 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/old-k8s-version/serial/Stop (13.14s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-757458 --alsologtostderr -v=3
E1009 23:39:35.643342   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-757458 --alsologtostderr -v=3: (13.144647193s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-199813 -n embed-certs-199813
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-199813 -n embed-certs-199813: exit status 7 (93.502618ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-199813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (323.29s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-199813 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2
E1009 23:39:40.422545   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/skaffold-961234/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-199813 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2: (5m23.0164462s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-199813 -n embed-certs-199813
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (323.29s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-757458 -n old-k8s-version-757458
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-757458 -n old-k8s-version-757458: exit status 7 (80.436166ms)

-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-757458 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (485.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-757458 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-757458 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (8m5.186386152s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-757458 -n old-k8s-version-757458
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (485.52s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (123.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-468042 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2
E1009 23:40:57.563716   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:41:03.882947   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:41:26.186158   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
E1009 23:42:18.208770   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-468042 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2: (2m3.159485321s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (123.16s)
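
Note: the default-k8s-diff-port group is the ordinary start flow with the API server moved to port 8444 via --apiserver-port=8444. One way to confirm the port took effect, assuming the kubectl context carries the profile name as elsewhere in this run:

	kubectl --context default-k8s-diff-port-468042 config view --minify -o jsonpath='{.clusters[0].cluster.server}'
	# expected: https://<node-ip>:8444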

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-468042 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f4041c74-51a2-486d-8924-1b150b909ec7] Pending
helpers_test.go:344: "busybox" [f4041c74-51a2-486d-8924-1b150b909ec7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f4041c74-51a2-486d-8924-1b150b909ec7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.040544029s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-468042 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.50s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-468042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-468042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.154754613s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-468042 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-468042 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-468042 --alsologtostderr -v=3: (13.139904621s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-468042 -n default-k8s-diff-port-468042
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-468042 -n default-k8s-diff-port-468042: exit status 7 (82.667085ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-468042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (341.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-468042 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2
E1009 23:43:00.834328   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/functional-964126/client.crt: no such file or directory
E1009 23:43:13.720721   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:43:41.404233   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
E1009 23:44:12.739264   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/skaffold-961234/client.crt: no such file or directory
E1009 23:44:29.232000   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-468042 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2: (5m40.782014816s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-468042 -n default-k8s-diff-port-468042
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (341.21s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9px26" [29e97ae6-1985-4665-a152-d7b730e8d1fc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023990842s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9px26" [29e97ae6-1985-4665-a152-d7b730e8d1fc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010893057s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-737489 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-737489 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)
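
Note: VerifyKubernetesImages lists every image in the container runtime over SSH and reports anything outside the expected Kubernetes/minikube set; the two gcr.io/k8s-minikube images above are presumably left over from earlier subtests and are logged, not failed. A hand-run equivalent of the listing (assumes jq is available on the host):

	out/minikube-linux-amd64 ssh -p no-preload-737489 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'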

TestStartStop/group/no-preload/serial/Pause (2.79s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-737489 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-737489 -n no-preload-737489
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-737489 -n no-preload-737489: exit status 2 (269.301059ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-737489 -n no-preload-737489
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-737489 -n no-preload-737489: exit status 2 (281.120706ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-737489 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-737489 -n no-preload-737489
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-737489 -n no-preload-737489
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.79s)
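
Note: the Pause sequence above is pause, assert the API server reports Paused and the kubelet reports Stopped (both status calls deliberately exit non-zero), then unpause and assert both recover. Replayed by hand it looks roughly like this (the combined go-template is an assumption, extrapolated from the single-field calls above):

	out/minikube-linux-amd64 pause -p no-preload-737489
	out/minikube-linux-amd64 status -p no-preload-737489 --format='{{.APIServer}}/{{.Kubelet}}' || true  # Paused/Stopped
	out/minikube-linux-amd64 unpause -p no-preload-737489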

TestStartStop/group/newest-cni/serial/FirstStart (76.22s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-077416 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-077416 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2: (1m16.221558055s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (76.22s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lkskp" [f49fb3a2-a358-4792-944e-2530c034f9bf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018142529s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lkskp" [f49fb3a2-a358-4792-944e-2530c034f9bf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012732929s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-199813 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-199813 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/embed-certs/serial/Pause (2.59s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-199813 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-199813 -n embed-certs-199813
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-199813 -n embed-certs-199813: exit status 2 (257.988679ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-199813 -n embed-certs-199813
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-199813 -n embed-certs-199813: exit status 2 (253.558158ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-199813 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-199813 -n embed-certs-199813
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-199813 -n embed-certs-199813
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.59s)

TestNetworkPlugins/group/auto/Start (78.61s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m18.609908808s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.61s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-077416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-077416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.180411973s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.18s)
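
Note: the warning is expected for this group: the profile was started with --network-plugin=cni but no CNI manifest was applied, so pods cannot come up until a network add-on is installed, and the test skips its usual pod checks. A quick way to observe the effect, assuming the kubectl context matches the profile:

	kubectl --context newest-cni-077416 get pods -A --field-selector=status.phase=Pending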

TestStartStop/group/newest-cni/serial/Stop (13.14s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-077416 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-077416 --alsologtostderr -v=3: (13.139940961s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.14s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-077416 -n newest-cni-077416
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-077416 -n newest-cni-077416: exit status 7 (114.915791ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-077416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/newest-cni/serial/SecondStart (52.7s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-077416 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2
E1009 23:46:26.186071   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/ingress-addon-legacy-466192/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-077416 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2: (52.397998213s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-077416 -n newest-cni-077416
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (52.70s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-516009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (12.5s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-516009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hfv46" [9eec77cb-f0ee-45c5-b5f1-1d66fd48197c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hfv46" [9eec77cb-f0ee-45c5-b5f1-1d66fd48197c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.021101493s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.50s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-516009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
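
Note: the three short checks above probe connectivity from inside the netcat deployment: DNS resolves kubernetes.default through cluster DNS, Localhost connects to the pod's own port on 127.0.0.1, and HairPin connects back to the pod through its own service name (hairpin NAT). The hairpin probe can be replayed by hand with the names from testdata/netcat-deployment.yaml:

	kubectl --context auto-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080 && echo hairpin-ok"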

TestNetworkPlugins/group/flannel/Start (91.03s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m31.031036994s)
--- PASS: TestNetworkPlugins/group/flannel/Start (91.03s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-077416 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (2.65s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-077416 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-077416 -n newest-cni-077416
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-077416 -n newest-cni-077416: exit status 2 (288.326863ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-077416 -n newest-cni-077416
E1009 23:47:18.208915   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-077416 -n newest-cni-077416: exit status 2 (303.78203ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-077416 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-077416 -n newest-cni-077416
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-077416 -n newest-cni-077416
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.65s)

TestNetworkPlugins/group/enable-default-cni/Start (93.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m33.110402919s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (93.11s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-wshxl" [958d4255-4cd7-4a6d-9b0c-375d499c3a87] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023871533s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-wshxl" [958d4255-4cd7-4a6d-9b0c-375d499c3a87] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013585999s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-757458 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/Pause (2.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-757458 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-757458 -n old-k8s-version-757458
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-757458 -n old-k8s-version-757458: exit status 2 (294.21255ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-757458 -n old-k8s-version-757458
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-757458 -n old-k8s-version-757458: exit status 2 (284.310743ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-757458 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-757458 -n old-k8s-version-757458
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-757458 -n old-k8s-version-757458
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.74s)

TestNetworkPlugins/group/bridge/Start (80.58s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E1009 23:48:13.720801   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/gvisor-823224/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m20.584637644s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.58s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vzlv8" [36e95810-7795-4ed1-9c79-7b139fef8c88] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021912038s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vzlv8" [36e95810-7795-4ed1-9c79-7b139fef8c88] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012472054s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-468042 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-468042 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-468042 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-468042 -n default-k8s-diff-port-468042
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-468042 -n default-k8s-diff-port-468042: exit status 2 (264.119875ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-468042 -n default-k8s-diff-port-468042
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-468042 -n default-k8s-diff-port-468042: exit status 2 (258.171617ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-468042 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-468042 -n default-k8s-diff-port-468042
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-468042 -n default-k8s-diff-port-468042
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)
E1009 23:51:35.417448   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/auto-516009/client.crt: no such file or directory
E1009 23:51:35.422760   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/auto-516009/client.crt: no such file or directory
E1009 23:51:35.433104   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/auto-516009/client.crt: no such file or directory
E1009 23:51:35.453438   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/auto-516009/client.crt: no such file or directory
E1009 23:51:35.493810   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/auto-516009/client.crt: no such file or directory
E1009 23:51:35.574167   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/auto-516009/client.crt: no such file or directory
E1009 23:51:35.735081   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/auto-516009/client.crt: no such file or directory
E1009 23:51:36.055723   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/auto-516009/client.crt: no such file or directory
E1009 23:51:36.696052   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/auto-516009/client.crt: no such file or directory
E1009 23:51:37.976975   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/auto-516009/client.crt: no such file or directory
E1009 23:51:40.537586   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/auto-516009/client.crt: no such file or directory
E1009 23:51:43.367004   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/no-preload-737489/client.crt: no such file or directory
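
Note: the burst of cert_rotation.go:168 errors here (and the similar lines scattered through this run) appears to come from client-go's certificate-rotation watcher still referencing client certificates of profiles that have since been deleted; the files are gone, so each refresh logs "no such file or directory". In this run they are noise, not failures.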

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-8748f" [3d49bb81-4371-43f1-9f23-3943204e61e0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.026321774s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)
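
Note: ControllerPod only waits for the flannel DaemonSet pod (label app=flannel in the kube-flannel namespace, as in the wait above) to be Running before the connectivity subtests begin. Checked by hand, assuming the context name matches the profile:

	kubectl --context flannel-516009 -n kube-flannel get daemonset,pods -l app=flannel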

TestNetworkPlugins/group/kubenet/Start (83.51s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m23.514271693s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (83.51s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-516009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (15.44s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-516009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s95rb" [093ef74c-a26d-4bf8-a895-df3ac927c3f7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s95rb" [093ef74c-a26d-4bf8-a895-df3ac927c3f7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.019801094s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.44s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-516009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-516009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kpdbw" [cac2e5a2-fdcf-4829-bf3d-784b71ae708f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kpdbw" [cac2e5a2-fdcf-4829-bf3d-784b71ae708f] Running
E1009 23:49:02.083130   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/no-preload-737489/client.crt: no such file or directory
E1009 23:49:04.643762   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/no-preload-737489/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.017800149s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.42s)

TestNetworkPlugins/group/flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-516009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-516009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/kindnet/Start (94.92s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E1009 23:49:20.005648   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/no-preload-737489/client.crt: no such file or directory
E1009 23:49:20.922941   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/old-k8s-version-757458/client.crt: no such file or directory
E1009 23:49:20.928210   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/old-k8s-version-757458/client.crt: no such file or directory
E1009 23:49:20.938507   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/old-k8s-version-757458/client.crt: no such file or directory
E1009 23:49:20.958825   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/old-k8s-version-757458/client.crt: no such file or directory
E1009 23:49:21.000044   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/old-k8s-version-757458/client.crt: no such file or directory
E1009 23:49:21.080383   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/old-k8s-version-757458/client.crt: no such file or directory
E1009 23:49:21.240755   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/old-k8s-version-757458/client.crt: no such file or directory
E1009 23:49:21.561508   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/old-k8s-version-757458/client.crt: no such file or directory
E1009 23:49:22.202002   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/old-k8s-version-757458/client.crt: no such file or directory
E1009 23:49:23.482946   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/old-k8s-version-757458/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m34.92328972s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (94.92s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-516009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
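Note on the KubeletFlags subtests: each one shells into the node with minikube ssh and inspects the kubelet command line ("pgrep -a kubelet") for the flags the profile should have set. A hedged Go sketch of that probe, assuming a "minikube" binary on PATH and reusing the bridge-516009 profile name from the log (illustrative, not the suite's own helper):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Run pgrep on the node over SSH and print the kubelet invocation;
        // the test then inspects this output for the expected flags.
        out, err := exec.Command("minikube", "ssh", "-p", "bridge-516009",
            "pgrep -a kubelet").CombinedOutput()
        if err != nil {
            fmt.Println("ssh probe failed:", err)
        }
        fmt.Printf("%s", out)
    }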

TestNetworkPlugins/group/calico/Start (137.57s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m17.567148728s)
--- PASS: TestNetworkPlugins/group/calico/Start (137.57s)

TestNetworkPlugins/group/bridge/NetCatPod (13.43s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-516009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-75h2p" [db216a0f-5f4c-423b-9f02-d2bf359f9d13] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1009 23:49:31.171377   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/old-k8s-version-757458/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-75h2p" [db216a0f-5f4c-423b-9f02-d2bf359f9d13] Running
E1009 23:49:40.486462   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/no-preload-737489/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.024519521s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.43s)
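Note on the NetCatPod subtests: each deploys testdata/netcat-deployment.yaml, then polls for up to 15m until pods matching app=netcat report Running and Ready (the Pending -> Running transitions logged above). A self-contained Go sketch of that poll-until-ready pattern, with the Kubernetes API query abstracted into a stand-in condition (the real suite's polling interval may differ):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls cond every interval until it returns true or the
    // timeout elapses. In the real suite the condition queries the
    // Kubernetes API for pod readiness; here it is a stand-in so the
    // sketch stays self-contained.
    func waitFor(cond func() bool, timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if cond() {
                return nil
            }
            time.Sleep(interval)
        }
        return errors.New("condition not met before deadline")
    }

    func main() {
        start := time.Now()
        err := waitFor(func() bool {
            return time.Since(start) > 2*time.Second // stand-in for "pod Ready"
        }, 15*time.Minute, time.Second)
        fmt.Println("wait result:", err)
    }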

TestNetworkPlugins/group/bridge/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-516009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)
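Note on the DNS subtests: "nslookup kubernetes.default" succeeds inside the netcat pod because the pod's /etc/resolv.conf search path expands the short name to kubernetes.default.svc.cluster.local. The same check expressed in Go, runnable inside any pod (outside a cluster the lookup is expected to fail):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Resolves via the pod's stub resolver and search domains.
        addrs, err := net.LookupHost("kubernetes.default")
        if err != nil {
            fmt.Println("in-cluster DNS lookup failed:", err)
            return
        }
        fmt.Println("kubernetes.default resolves to:", addrs)
    }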

TestNetworkPlugins/group/bridge/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (110.67s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m50.668734063s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (110.67s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-516009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kubenet/NetCatPod (14.36s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-516009 replace --force -f testdata/netcat-deployment.yaml
E1009 23:50:01.893192   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/old-k8s-version-757458/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-l56nb" [162000bf-51d7-4e43-8ddb-b1adb52cddde] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-l56nb" [162000bf-51d7-4e43-8ddb-b1adb52cddde] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.013025833s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.36s)

TestNetworkPlugins/group/kubenet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-516009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)

TestNetworkPlugins/group/false/Start (93.6s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E1009 23:50:42.854242   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/old-k8s-version-757458/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-516009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m33.603080114s)
--- PASS: TestNetworkPlugins/group/false/Start (93.60s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6fnr4" [21e9cc81-dbcb-493d-9c1f-b4347620d21c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.030024797s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-516009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.37s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-516009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4b6g5" [60dce701-e29a-4de2-a2fe-b01ea89368c8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4b6g5" [60dce701-e29a-4de2-a2fe-b01ea89368c8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.012557494s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.37s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-516009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

TestNetworkPlugins/group/kindnet/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4fw2k" [1c3a66ef-f375-4dfa-8e78-5f099f5b54bb] Running
E1009 23:51:45.658165   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/auto-516009/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.029690947s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-516009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

TestNetworkPlugins/group/calico/NetCatPod (12.36s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-516009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kbkmw" [42a23447-f8c2-471c-88ae-948d9d06524f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kbkmw" [42a23447-f8c2-471c-88ae-948d9d06524f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.012265518s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.36s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-516009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.35s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-516009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6bzpz" [f45d12a5-63dd-4fb9-b311-ae7b4b8af617] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1009 23:51:55.899222   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/auto-516009/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-6bzpz" [f45d12a5-63dd-4fb9-b311-ae7b4b8af617] Running
E1009 23:52:01.257113   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.012510713s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.35s)

TestNetworkPlugins/group/calico/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-516009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-516009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/false/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-516009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.24s)

TestNetworkPlugins/group/false/NetCatPod (12.37s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-516009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kvw57" [8bc84c80-8354-4fc3-bfa0-0c40e0f8069e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1009 23:52:16.379404   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/auto-516009/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-kvw57" [8bc84c80-8354-4fc3-bfa0-0c40e0f8069e] Running
E1009 23:52:18.208386   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/addons-229072/client.crt: no such file or directory
E1009 23:52:18.547098   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/default-k8s-diff-port-468042/client.crt: no such file or directory
E1009 23:52:18.552419   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/default-k8s-diff-port-468042/client.crt: no such file or directory
E1009 23:52:18.562705   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/default-k8s-diff-port-468042/client.crt: no such file or directory
E1009 23:52:18.582996   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/default-k8s-diff-port-468042/client.crt: no such file or directory
E1009 23:52:18.623365   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/default-k8s-diff-port-468042/client.crt: no such file or directory
E1009 23:52:18.703751   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/default-k8s-diff-port-468042/client.crt: no such file or directory
E1009 23:52:18.864009   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/default-k8s-diff-port-468042/client.crt: no such file or directory
E1009 23:52:19.184882   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/default-k8s-diff-port-468042/client.crt: no such file or directory
E1009 23:52:19.825230   85601 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-78415/.minikube/profiles/default-k8s-diff-port-468042/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.010347528s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.37s)

TestNetworkPlugins/group/false/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-516009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

TestNetworkPlugins/group/false/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

TestNetworkPlugins/group/false/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-516009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)

Test skip (31/321)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-654841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-654841
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/cilium (4.04s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-516009 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-516009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-516009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-516009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-516009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-516009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-516009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-516009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-516009" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-516009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-516009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-516009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-516009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-516009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-516009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-516009" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-516009

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

>>> host: cri-dockerd version:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

>>> host: containerd daemon status:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

>>> host: containerd daemon config:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

>>> host: containerd config dump:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

>>> host: crio daemon status:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

>>> host: crio daemon config:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

>>> host: /etc/crio:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

>>> host: crio config:
* Profile "cilium-516009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-516009"

----------------------- debugLogs end: cilium-516009 [took: 3.865453151s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-516009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-516009
--- SKIP: TestNetworkPlugins/group/cilium (4.04s)